# Johannes Glückler · Robert Panitz *Editors*

*Klaus Tschira Symposia* Knowledge and Space 19

# **Knowledge and Digital Technology**

## **Knowledge and Space**

Volume 19

**Series Editor** Johannes Glückler, Department of Geography, LMU Munich, Munich, Germany

#### **Knowledge and Space**

This series explores the relationship between geography and the creation, use, and reproduction of knowledge. The volumes cover a broad range of topics, including: clashes of knowledge; milieus of creativity; geographies of science; cultural memories; knowledge and the economy; learning organizations; knowledge and power; ethnic and cultural dimensions of knowledge; knowledge and action; mobilities of knowledge; knowledge and networks; knowledge and institutions; geographies of the university; geographies of schooling; knowledge for governance; space, place and educational settings; knowledge and civil society; and professions and proficiency. These topics are analyzed and discussed by scholars from a range of disciplines, schools of thought, and academic cultures.

**Knowledge and Space** is the outcome of an agreement concluded by the Klaus Tschira Foundation and Springer in 2006.

Johannes Glückler • Robert Panitz Editors

# Knowledge and Digital Technology

*Editors* Johannes Glückler Department of Geography LMU Munich Munich, Germany

Robert Panitz Institute of Management University of Koblenz Koblenz, Germany

ISSN 1877-9220 ISSN 2543-0580 (electronic)
Knowledge and Space
ISBN 978-3-031-39100-2 ISBN 978-3-031-39101-9 (eBook)
https://doi.org/10.1007/978-3-031-39101-9

© The Editor(s) (if applicable) and The Author(s) 2024. This book is an open access publication.

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Paper in this product is recyclable.

## **Acknowledgments**

The editors thank the Klaus Tschira Foundation for funding the symposia and the Springer book series on Knowledge and Space. The teams of the Klaus Tschira Foundation and the Studio Villa Bosch have contributed greatly to the success of the symposia for more than a decade. Together with all the authors in this volume, we are grateful to Marius Zipf, Klara Jungkunz, and Linda Sendlinger for their superb assistance to the editors, as well as to the technical editing team for their tireless dedication. Volker Schniepp at the Department of Geography at Heidelberg University has generously helped us to get figures and maps into shape for publication. We also thank all student assistants and colleagues from the Department of Geography who have helped accomplish the symposium as well as this 19th volume in the book series. We are particularly grateful to Lena Buchner, Tobias Friedlaender, Johannes Nützel, Sandy Placzek, and Helen Sandbrink.

## **Contents**



## **Contributors**

**Luis F. Alvarez León** Department of Geography, Dartmouth College, Hanover, NH, USA

**Ryan Burns** Department of Geography, University of Calgary, Calgary, AB, Canada

**Jeremy Crampton** Department of Geography, George Washington University, Washington, DC, USA

**Zoltán Cséfalvay** Centre for Next Technological Futures, Mathias Corvinus Collegium, Budapest, Hungary

**Ido Erev** Faculty of Data and Decision Sciences, Technion Israel Institute of Technology, Haifa, Israel

**Nancy Ettlinger** Department of Geography, Ohio State University, Columbus, OH, USA

**Johannes Glückler** Department of Geography, LMU Munich, Munich, Germany

**Kôiti Hasida** Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan

**Lea Hennala** School of Engineering Science, Lappeenranta-Lahti University of Technology, Lahti, Finland

**Manal Hreib** Faculty of Data and Decision Sciences, Technion Israel Institute of Technology, Haifa, Israel

**Helinä Melkas** School of Engineering Science, Lappeenranta-Lahti University of Technology, Lahti, Finland

**Joachim Meyer** Department of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel

**Nancy Odendaal** School of Architecture, Planning and Geomatics, University of Cape Town, Cape Town, South Africa

**Robert Panitz** Institute of Management, University of Koblenz, Koblenz, Germany

**Satu Pekkarinen** School of Engineering Science, Lappeenranta-Lahti University of Technology, Lahti, Finland

**Alison B. Powell** Department of Media and Communications, London School of Economics and Political Science, London, UK

**Felix G. Rebitschek** Harding Centre for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany

Max Planck Institute for Human Development, Berlin, Germany

**Kinneret Teodorescu** Faculty of Data and Decision Sciences, Technion Israel Institute of Technology, Haifa, Israel

**Andranik Tumasjan** Chair for Management and Digital Transformation, Johannes Gutenberg University Mainz, Mainz, Germany

## **Chapter 1 Introduction: Knowledge and Digital Technology**

**Robert Panitz and Johannes Glückler**

Development happens as a society undergoes structural transformation. Structural change in a society's culture, institutions, and technologies is driven by new ways of thinking, new knowledge, and innovations. Although the latest wave of technological change, often referred to as the fifth Kondratieff cycle (Schumpeter, 1961), has been transforming world society since the 1990s, innovative uses of digital technology continue to yield radical and disruptive changes. Digitization has been central to shaping new ways of *observing* (e.g., by collecting big data and augmenting reality), *knowing* (e.g., supported by machine learning), and *transforming* (e.g., by automation and robotics) our environment. As humanity uses its knowledge to advance technologies, which in turn affect human knowledge and our ways of learning, we have dedicated this book to the reflexive relationship between knowledge and technology. In addition, geography is an important, yet frequently neglected, context for the ways in which people and organizations generate new knowledge, how they adopt and use new technologies, and how the use of these technologies affects their knowledge. At the same time, technological advances have an immediate impact on human knowledge of geography and space. Whereas people once used maps and compasses to find their way around, today GPS-based navigation services take over all the work, with the effect of gradually diminishing both human cognition of space (Yan et al., 2022) and spatial knowledge acquisition (Brügger, Richter, & Fabrikant, 2019). This 19th volume in the Springer series *Knowledge and Space* brings together leading interdisciplinary expertise, new empirical evidence, and conceptual propositions on the conditions, impact, and future potential of digital technologies for varying geographies of human society.

R. Panitz
Institute of Management, University of Koblenz, Koblenz, Germany
e-mail: panitz@uni-koblenz.de

J. Glückler (\*)
Department of Geography, LMU Munich, Munich, Germany
e-mail: johannes.glueckler@lmu.de

© The Author(s) 2024
J. Glückler, R. Panitz (eds.), *Knowledge and Digital Technology*, Knowledge and Space 19, https://doi.org/10.1007/978-3-031-39101-9\_1

#### **Knowledge, Digital Technology, and Space**

While we were preparing this book for publication, another new technology knocked at the door of the academy—one promising to change practices not only in universities and education, but in social life more generally. The introduction of a new generation of artificial intelligence (AI) technologies, especially *large language models* such as ChatGPT, has been challenging incumbent practices of collecting and condensing information in written works, as well as the evaluation of student work that aims at the faithful reproduction of published knowledge. At the end of November 2022, OpenAI started to offer public access to ChatGPT, which uses machine learning methods to generate text-based answers to user queries. Whereas computer-generated content was previously marked by an artificial style and tone, the current version of ChatGPT produces text that is hard to distinguish from human-authored content. It has become increasingly difficult to distinguish artificial from natural intelligence in written texts.
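The principle behind such text generation is simple to state, even if the models themselves are vast: at each step, the model assigns a probability to every candidate next token and samples one from the resulting distribution. The following minimal sketch—our illustration, not OpenAI's implementation, with an invented vocabulary and invented scores—conveys the idea:

```python
# Toy illustration of probabilistic text generation: a language model scores
# candidate next tokens, and one token is sampled from the resulting
# probability distribution. Vocabulary and logits here are invented.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["knowledge", "space", "technology", "geography"]
logits = np.array([2.0, 1.0, 1.5, 0.5])  # hypothetical model scores

def sample_next_token(logits: np.ndarray, temperature: float = 0.8) -> int:
    """Convert scores to probabilities (softmax) and sample one token index."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

print(vocab[sample_next_token(logits)])  # e.g., "knowledge"
```

Lower sampling temperatures make the output more deterministic; higher temperatures make it more varied, which is one reason the same prompt can yield different texts.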

When the first cases of ChatGPT-generated student theses appeared, a discussion began over the legal and academic nature of these texts and whether they qualify as plagiarism. Whereas some view AI as a tool to help produce scientific output, others reject it as impermissible. Because ChatGPT uses probability functions to generate texts, some researchers have expressed doubts about the technology's analytical power and reliability (Dwivedi et al., 2023; Else, 2023; Stokel-Walker, 2022, 2023; Stokel-Walker & Van Noorden, 2023; van Dis et al., 2023). Unsurprisingly, various academic journals have adopted diverging policies on how to handle AI-generated texts. Whereas the journal *Science* bans all articles that are based on AI-assisted tools (Brainard, 2023), *Elsevier* (Elsevier, 2023) and *Springer* (Brainard, 2023) allow such usage on condition that the authors disclose it. Because academic publishers consider AI unable to assume full authorial responsibility, they cannot treat AI as an author. Simultaneously, however, artificial intelligence and large language models (LLMs) have created potential for new markets, business models, applications, and services. For example, market research departments have started to use this technology for sentiment analysis. Chatbots and virtual assistants are used for customer communication, as are translation apps and websites. Specialized services such as fraud detection or AI programming assistants are further real-world examples.

These latest developments have evoked controversy around AI because of the lack of knowledge and the uncertainty about the relationships between (i) knowledge and new digital technologies, (ii) digital technology and space, and (iii) digital technology, law, and ethics.

First, the relationship between such technologies and knowledge is reflexive: Technology is the fruit of human creativity and knowledge, but it also changes how we learn, what we need to know, and what we believe we know. Given the ubiquity of digital geodata and navigation services, for example, what proportion of people could still find their way through unfamiliar territory with only a printed map and a compass? And yet—is such a skill still relevant? Similarly, whereas motivational factors positively affect the adoption of technology, as the authors of technology acceptance studies have shown (Al-Emran & Granić, 2021; Escobar-Rodriguez & Monge-Lozano, 2012; Venkatesh & Bala, 2008), frequent use of cell phones and social media has been reported to negatively affect average student grades (Junco, 2012; Lepp, Barkley, & Karpinski, 2014). This leads to various questions on the *relationship between knowledge and technology*: How will advances in digital technology, such as machine learning, affect how we learn, what we know, and what we believe to be knowledge? How do participatory media change human learning (Martin & Ertzberger, 2013)? How do new sources and magnitudes of data and algorithms affect knowledge creation and the corresponding processes of validation and interpretation? What kinds of knowledge become obsolete, and what types of new knowledge move to the foreground of human curiosity and exploration? What kinds of skills are needed in the digital age (van Laar, van Deursen, van Dijk, & de Haan, 2017)? These questions encapsulate the grounding interests of this book.

Second, in this book we seek to explore the *relationship between technology and space*. Digital technologies have been transforming the social and spatial relations of industries, markets, and societies. An example is the use of knowledge management systems and software in most organizations. According to the mirroring hypothesis of Colfer and Baldwin (2016), organizational and communication relations coevolve with technical dependencies. The development of new digital tasks and new forms of digital divisions of labor reshapes the economic system, its organizational networks, and the structure of societies as a whole (Acemoglu & Restrepo, 2019). The United Nations Conference on Trade and Development has acknowledged this by defining the digital economy as comprising not only digital core technologies such as computers, telecommunications, the internet, or the digital and information technology (IT) sectors, but also "a wider set of digitalizing sectors" such as media, finance, and tourism (UNCTAD, 2019, p. 5). At the same time, a nagging question has returned: What is the unique nature of human work that cannot be replaced by technological solutions, and how will technology endanger workplaces in the future (David, 2017; Frey & Osborne, 2017; Tuisku et al., 2019)? This question inspires a further one: How can human work and technology complement each other (Autor, 2015; Kong, Luo, Huang, & Yang, 2019)? Of course, qualified human capital is a prerequisite for technological development (Bresnahan, Brynjolfsson, & Hitt, 2002), which also stimulates research on favorable organizational environments and ecosystems that, in turn, help to spark technological innovation. Researchers working in geographical traditions have deployed concepts of clusters, entrepreneurial ecosystems, and regional systems of innovation to study and support technological advance (Alvedalen & Boschma, 2017; Asheim, Cooke, & Martin, 2006; Bathelt, Malmberg, & Maskell, 2004; Braczyk, Cooke, & Heidenreich, 2004; Malecki, 2018; Porter, 2000; Stam, 2018; Uyarra & Flanagan, 2016). In this respect, this book pursues questions including: How do the digital and physical worlds affect each other? What opportunities and constraints for the spatial relations of society arise from digital and remote interactions? How do digital technologies and business models affect the organization of the space economy? How does digital life disengage people from their environment? What is the environmental impact of massive digitization?

Third, academic research, theorizing, and technological development are subject to normative beliefs and ethical concerns. Differences in worldviews and paradigms, priorities and interests, and methodologies and empirical foci also shape our views on digital technology. With this volume, we aim to support dialogue among scholars from the social, natural, and engineering sciences by addressing select ethical problems of new technological applications from various perspectives, including data privacy, surveillance, inequalities, resource extraction, and technological determination. The use of a technology already implies ethical questions (Sharkey & Sharkey, 2012), because every new technology enables new forms of action and practices, which may diverge from extant social institutions or formal regulations (Glückler, Suddaby, & Lenz, 2018) at the moment of their insertion into a social context. Because AI acts in part autonomously and may operate according to encoded ethical standards (Hagendorff, 2020), a new wave of ethical debate has surged. Therefore, we here also discuss questions around the *relationship between the ethics, norms, and governance of technology,* including: To what extent can society routinize and trust in automated screening, filtering, and assessment based on algorithms and artificial intelligence? What are the ethical challenges that arise with cognitive and human enhancement? What is the future of intellectual property rights in an age of digital ubiquity?

#### **Structure of the Book**

This volume comprises 13 original contributions by researchers of different disciplines, ranging from management and economics, computer science, sociology, and geography to psychology, architecture, and planning, as well as media and communication science. These contributions are organized into three parts, each in response to one of the three guiding questions about the relations of digital technology with knowledge, geography, and ethics outlined in the previous section.

Part I of this book focuses on the reflexive relationship between *Technology, Learning, and Decision-Making*. Its authors demonstrate how digital technologies support decision-making and learning, while depending on human knowledge as a critical prerequisite for the development and productive use of these technologies.

In Chapter 2, Helinä Melkas, Satu Pekkarinen, and Lea Hennala address the refexivity of knowledge and technology in the context of health technologies and their adoption in elderly care. Care robots offer great potential for healthcare and welfare sectors, thanks to advancements such as improved safety features and cognitive capabilities. Yet a limiting factor is the lack of knowledge on how to effectively apply and interact with these robots. Melkas et al. (2024) inquire about knowledge as a key factor for the introduction, utilization, and assessment of care robots. To understand the process of orienting oneself to the use of care robots, the authors propose examining the co-creative processes involved in the introduction of technology, the process of familiarization, and the acquisition of new knowledge and skills. The processes and interactions between those providing orientation and those receiving it prove particularly critical for understanding the underlying learning processes. In this regard, actors at the societal level play an important role as providers of orientation knowledge.

Moving from human-machine interaction to the question of the algorithmic (in-)dependence of human behavior in decision-making, Joachim Meyer (2024) argues in Chapter 3 that effective data-driven decision-making requires an understanding and modeling of human behavior. Such understanding provides valuable insights into different decision domains and eases the evaluation of the available data, thus preventing decisions from being influenced by systemic biases. This insight is particularly vital as the rise of artificial intelligence and data science in decision support systems raises questions about humans' role in decision-making. By examining the analytical processes involved in data-based decision-making, Meyer reveals that human decisions are in fact involved at each step, from data preparation and the selection of algorithms to iterative analyses and the visualization and interpretation of results.
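Meyer's argument can be made concrete with a schematic example. In the hypothetical analysis pipeline below—our illustration, not taken from the chapter—each commented step marks a decision that no algorithm makes on its own:

```python
# Illustrative sketch: even a "data-driven" pipeline embeds human decisions
# at every step, marked HUMAN CHOICE below. Data and parameters are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                     # hypothetical observations
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # hypothetical outcome

# HUMAN CHOICE: which records count as outliers and get dropped?
keep = np.abs(X).max(axis=1) < 3.0
X, y = X[keep], y[keep]

# HUMAN CHOICE: which algorithm, and which hyperparameters?
model = LogisticRegression(C=1.0)

# HUMAN CHOICE: how to split the data, and which metric to report?
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_tr, y_tr)
accuracy = model.score(X_te, y_te)

# HUMAN CHOICE: is this accuracy "good enough" to act on?
print(f"accuracy = {accuracy:.2f}")
```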

Whereas Joachim Meyer illustrates how technological solutions arrive at better decisions through human assistance, Felix Rebitschek (2024) explores how people can be supported to make informed decisions. In Chapter 4, he introduces fast-and-frugal decision trees as interpretable models that assist consumers in decision-making processes under uncertainty. These decision trees help consumers navigate complex information landscapes and evaluate accessible information to make informed decisions. Such tools are valuable in situations where finding quality-assured, objectively required, and subjectively needed information is essential for consumers navigating uncertain and complex decision environments, such as retail or news platforms. Rebitschek gives an overview of expert-driven decision-tree developments from a consumer research project and examines their impact on decision-making.
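For readers unfamiliar with the format, a fast-and-frugal tree poses a short, ordered sequence of yes/no questions, and every question except the last offers an immediate exit with a decision. A minimal sketch, with invented cues rather than those developed in Rebitschek's project, might look as follows:

```python
# Minimal sketch of a fast-and-frugal tree (FFT): an ordered list of yes/no
# cues in which every cue but the last offers an immediate exit decision.
# The cues below are hypothetical consumer-choice examples.
def fft_decide(offer: dict) -> str:
    # Cue 1: exit to "reject" if the source is not quality-assured.
    if not offer["quality_assured"]:
        return "reject"
    # Cue 2: exit to "reject" if costs are hidden or unclear.
    if offer["hidden_costs"]:
        return "reject"
    # Final cue has both exits: accept only if independent reviews are positive.
    return "accept" if offer["positive_reviews"] else "reject"

print(fft_decide({"quality_assured": True, "hidden_costs": False,
                  "positive_reviews": True}))   # -> accept
```

The appeal of such trees is that they are transparent: a consumer can check at a glance which cue produced the decision, which is harder with opaque statistical models.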

In Chapter 5, Nancy Ettlinger (2024) discusses how digital educational technology presents a significant promise and business opportunity that educational institutions and the edtech industry are increasingly choosing to adopt. However, the underlying pedagogy of datafying knowledge prioritizes skills while bypassing contextual and conceptual knowledge. As a result, it encourages a technocratic mindset that lacks emphasis on interpersonal connections, while also obscuring the impacts of these technological implementations, which depend on the acquired expertise of workers. The datafication of knowledge, she argues, thereby contributes to growing social and data injustices, social tensions, and inequalities. Contrary to the assumption that disruptive digital technology has ushered in an entirely new pedagogy, Ettlinger demonstrates that this pedagogy has a history that foreshadows various wide-ranging problems related to non-relational thinking and a lack of criticality within the digital sciences and among their users.

In Part II, we explore the relationship between the *Spaces of Digital Entrepreneurship, Labor, and Civic Engagement*. Its contributions examine the benefits of geographical agglomeration for business scaleups, study the nature and impact of legal regimes on the development of digital markets, discuss the use of digital devices in mobilizing resources for social activism, investigate citizen responses to smart city interventions as well as their implications for political polarization, and highlight the relational spaces of digital labor and its global positioning.

Zoltán Cséfalvay (2024) recognizes the association of digital technology and innovation with the challenge of scaling up business models and entrepreneurial start-ups. As he argues in Chapter 6, digital solutions require a critical mass of customers and infrastructure to unlock their full market potential and value proposition. Entrepreneurial ecosystems and start-up environments are often described as geographical phenomena that foster the growth and scaling up of start-ups. Cséfalvay provides a critical review of this concept and sets out to analyse a comprehensive database of 12,500 scaleups—that is, start-ups that raised more than €1 million—across European regions and at the city level. He finds a West-East and a North-South divide, as well as a concentration of scaleup and funding activities in just a few European cities. In addition, he notes that university towns with locally available human capital contribute to some convergence. Nevertheless, he observes self-reinforcing scaleup ecosystems in only a few cities, whereas large cities in Southern, Central, and Eastern Europe tend to lag behind. Overall, he uses his detailed empirical analysis to offer plentiful evidence both of the benefits of geographical agglomeration in promoting technological entrepreneurship and scaleups and of the enormous spatial variation between cities in their ability to actually promote such technological innovativeness.

In Chapter 7, Luis F. Alvarez León (2024) shows how commercial actors have managed to privatize what public organizations had actually generated as free data by making only the limited modifications that are sufficient to claim copyright. Concretely, he examines the establishment of geographic information markets in the U.S. and focuses on the development of legal and technical interoperability in the collection and dissemination of geographic information, as well as the establishment of new intellectual property regimes. Alvarez León analyses the institutional configuration between the government, private firms, and the public in the United States. Within this context, the institutional configuration limits the government's ability to act as a producer of geographic information in the market. Data generated by the government are treated as public data with free usage rights, whereas products developed by private firms and individuals on the basis of such public data become subject to property rights. This situation creates a conducive environment for the continuous production, consumption, circulation, and transformation of geographic information within a growing market. Recognizing the institutional, legal, and technical dimensions of the geographic information market, Alvarez León offers a better understanding and an illustrative national example of the value production processes associated with geographic information and informational resources.

In Chapter 8, Nancy Odendaal (2024) illustrates the leverage effect of digital technologies on human action in physical space and vice versa. She offers insights into how digital devices and solutions contribute to resource mobilization for social activism. When "thinking about cyborg activism," she refers to the concept of hybridity and how it characterizes digitally informed social action. She draws on the empirical case of South African cities during the COVID-19 pandemic, highlighting the inefficiencies of cities in addressing inequalities and social problems. In response, civil society organizations employed online and offline strategies to raise awareness, mobilize resources, and exert pressure on the government to effectively address urgent issues. She uses two empirical examples to illustrate the characteristics of these mobilization approaches, highlighting the synergy of technology, tactics, and storytelling that shapes group efforts. Through the use of both digital and physical methods that establish a dynamic and responsive interaction between materials and individuals, activists participate in a dynamic interplay of resources and awareness across both private and public domains, encompassing emotions and level-headed political strategies, as well as rationality and fervor.

In Chapter 9, Alison Powell (2024) examines citizen action in response to "smart city" interventions in London during COVID-19 lockdowns aimed at improving air quality. Specifically, she explores the experimental implementation of low-traffic neighbourhoods. She reveals that such responses to smart governance resulted in political polarization due to a lack of opportunities to express frictions or dissenting opinions. Through an analysis of posts from a Facebook group that generally opposes the introduction of data-driven low-traffic zones, she makes clear that different emotions impact the perceived legitimacy of political actions. Faced with no avenues to express opposing views and feelings within a data-driven smart governance setting, individuals start to question and delegitimize government-collected data. Furthermore, they begin to generate their own vernacular evidence and form common identities. Thus, data frictions become intertwined with affective politics. In other words, if strong feelings are disregarded and not incorporated into the social validation process, fertile ground for antagonism and animosity is created, potentially resulting in political polarization.

Conceptualizations of space impact our understanding of digital technologies. In Chapter 10, Ryan Burns (2024) argues for a relational understanding of digital work instead of an absolute conceptualization of space. Although researchers of digital labor have shed light on the relations, inequalities, and implications of productive capacities embedded in everyday activities, they have insufficiently addressed the spaces where this labor takes place. From a relational perspective, networks and connections constitute the positions and practices of actors and shape the space of digital labor. According to Burns, digital labor transcends national boundaries and specific locations due to digital connectivity and interactions. With this relational perspective, he shifts the view of digital labor from a discrete, remunerated act to immaterial, cognitive, attentional, and symbolic labor.

Part III comprises a set of chapters that discuss some of the controversial issues regarding the *Ethics, Norms, and Governance of Technology.* Together, they show how those creating new forms of design and governance of digital technologies can potentially respect norms and ethics around data privacy, individual autonomy, and social inclusion. They provide insights into a variety of governance modes associated with digital data, technology, and trade. These modes range from centralized to decentralized structures and from market-driven to state-driven arrangements—from rule enforcement based on centralized AI access to big data, intended to improve privacy protection, to the withdrawal of personal data from centralized access via personal data repositories.

In Chapter 11, Andranik Tumasjan (2024) focuses on the rise of decentralized business models, marketplaces, and organizations based on blockchain technology. Given the confusion surrounding the meaning of "decentralized" in the context of blockchain technology and business models, as well as the technology's unclear implications for mass customers, Tumasjan discusses the notion of decentralization in blockchain-based decentralized business models. He offers a two-dimensional framework to explain decentralization in such contexts. Building on this typology, he assesses the implications, prerequisites, and desirability of decentralization for the adoption of blockchain-based decentralized business models.

The collection and concentration of personal data by the state is also a contested issue, as it enables the state's potential for massive surveillance and the erosion of privacy. In Chapter 12, Ido Erev et al. (2024) argue that although digital control and observation of human behavior are common in modern societies, the enforcement of rules and laws based on such observations often proves ineffective in preventing involuntary and illegal acts. Moreover, the notion of a highly effective digital system based on big data and artificial intelligence that supports state authority often goes hand in hand with fears of excessive surveillance. In response, the authors propose that the utilization of big data, artificial intelligence, and even simple reactive technology can reduce the need for severe and costly punishments. Instead, just as an irritating sound reminds car drivers to fasten their seat belts, immediate technological intervention offers the potential, when cleverly designed, to sanction undesired behavior and enforce existing rules in a gentle manner, while preserving privacy.
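The seat-belt reminder captures this logic in its simplest form: the sanction is mild, automatic, and tied to the moment of the violation, and it requires no record of who violated what. The following toy sketch—our illustration, with a hypothetical sensor interface—shows how stateless such an intervention can be:

```python
# Toy sketch of gentle, immediate rule enforcement, modeled on the seat-belt
# reminder: a mild signal fires the moment the rule is broken, and nothing
# about the violation is recorded, preserving privacy.
def remind(sensor_readings, signal=lambda: print("beep")):
    """Emit a gentle signal for every reading in which the belt is unbuckled."""
    for buckled in sensor_readings:
        if not buckled:
            signal()  # immediate, mild, stateless sanction

remind([True, False, False, True])  # beeps twice; no log, no fine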

This discussion is carried further by Chapter 13, whose author takes a critical look at the risks associated with digital technologies for the massive accumulation, storage, and extraction of digital personal data. Kôiti Hasida (2024) argues that current systems primarily handle personal data through centralized artificial intelligence and centralized data management. Such centralized system architectures, along with related regulations, impose usage restrictions on personal data within these systems, yet only the individuals who are the data subjects have full legal rights to their private data. As a solution, Hasida proposes the decentralized management of personal data, introducing the concept of a personal life repository as a software library that enables decentralized data management. This decentralized approach would offer interfaces for various use cases and incorporate personal artificial intelligence, thereby maximizing the value of personal data. Hasida demonstrates how a personal data repository would support the decentralized management of private data for billions of individuals at a remarkably low cost. Simultaneously, it would ensure high security and privacy, facilitating the development of private AI and graph documents. In essence, this contribution provides insights into a system that enables decentralized governance of private data.
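To make the contrast with centralized architectures tangible, consider the following minimal sketch of a consent-gated personal repository. It is our illustration only—Hasida's personal life repository is a far more elaborate system, and all names below are invented:

```python
# Hypothetical sketch of a personal data repository: the data subject holds
# the records and grants per-requester, per-record access, instead of a
# central server holding everyone's data. All names are invented.
from dataclasses import dataclass, field

@dataclass
class PersonalRepository:
    owner: str
    _records: dict = field(default_factory=dict)
    _consents: set = field(default_factory=set)   # (requester, key) pairs

    def store(self, key, value):
        """Data stays under the subject's control, not on a central server."""
        self._records[key] = value

    def grant(self, requester, key):
        """The owner explicitly consents to one requester reading one record."""
        self._consents.add((requester, key))

    def read(self, requester, key):
        """Access succeeds only with the owner's explicit consent."""
        if (requester, key) not in self._consents:
            raise PermissionError(f"{requester} has no consent for {key!r}")
        return self._records[key]

repo = PersonalRepository(owner="alice")
repo.store("heart_rate", [62, 64, 61])
repo.grant("clinic", "heart_rate")
print(repo.read("clinic", "heart_rate"))   # the clinic sees only this record
```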

In the final Chapter 14, Jeremy Crampton (2024) switches perspectives to new unregulated markets and business models that are growth driven and focused on value extraction. He discusses how current digital business models and processes create digital geographies that generate value through the emergence of new markets. He focuses on the digital geographies of geofences and cryptocurrencies, highlighting criticisms of their toxic characteristics, which promote unsustainable growth and value extraction. Drawing on inspirations from the slow food movement and the ethics of slowness, Crampton introduces the concept of a "slow data economy," along with six underlying principles. He aims these principles at fostering alternative, responsible innovation and business models that prioritize the creation of social value instead of the privatization and extraction of value. The fundamental idea behind the slow data economy is to shift investment focus "from growth and extraction to care and repair."

#### **Conclusion**

This 19th volume of the *Knowledge and Space* series has collected international and interdisciplinary expertise around the nexus of knowledge, space, and digital technologies. Since the launch of the commercial internet in the early 1990s, digital technologies have led to the creation of new work practices, occupations, industries, and markets. Digitization has also deeply impacted place and space, including new spatial divisions of labor; the globalization of media, business, and trade; and the interrelations between the physical and the digital in synchronous as well as asynchronous communication and interaction. With generalized artificial intelligence, robotic automation, blockchain technology, and the like, a new wave of disruptive transformations is looming. Without claiming to offer a comprehensive or complete analysis of these issues, we present original views, concepts, and empirical evidence that shed light on the interdependence of these new technologies with human knowledge, social norms and ethics, and geographical space.

Through their analyses, this book's authors demonstrate that, at least for now, technology and human knowledge are inherently interdependent. Although AI algorithms guide our decision-making, they are still founded on human assumptions and decisions. And although robots can technically take over part of human work, the legitimacy and, accordingly, the helpfulness of their contributions depend on social institutions. This dependence on social and institutional contexts also points to the role of geography and space in the evolution of digital technologies. The contributors illustrate how strongly technological entrepreneurship and advances benefit from spatial agglomeration in key cities and regions, and how spatial variation in institutional contexts—including spatially bound regulations, social institutions, and organizational fields—shapes diverse geographies of technology.

Readers will also find critical assessments of the ethical risks and social injustice emanating from digital technologies when, for example, education is reduced to datafication. Conversely, they will learn that digital technology can actually endorse ethical norms, for example by preserving privacy and autonomy over personal data. However, different forms of regulation and governance modes influence the usage and design of new digital technologies. Decentralized technological solutions, such as blockchains, often run up against centralized state structures, and product developers can use public data to create private goods that are commercially traded on digital markets. Although digital technologies have the potential to produce common goods, e.g., free data for the sake of all, whether and how these virtues are actually unleashed remains dependent on legal regimes and regulations.

#### **References**


**Robert Panitz** is Junior Professor in Technology and Innovation Management at the University of Koblenz, Germany. He serves as a guest lecturer at the Heidelberg Center Latin America in Chile. In 2020, he held an Interim Professor position at the University of Bremen. From 2010 to 2023, he worked at Heidelberg University in the field of Economic Geography. He is a member of the German Society for Social Networks (DGNet). His research focuses on social and organizational structures and processes that foster innovation and technology development, as well as the economic and social effects of such advancements. Methodologically, he has a particular interest in social network analysis.

**Johannes Glückler** is Professor and Chair of Economic Geographies of the Future at LMU Munich, Germany. Previously, he was a professor of economic and social geography at Heidelberg University between 2008 and 2023. In his research he develops a relational perspective of social networks, institutions, and governance in the study of the geography of knowledge, innovation, and regional development. He is a founding board member of the German Society for Social Networks (DGNet) and co-founder of the M.Sc. Governance of Risk and Resources at the Heidelberg Center for Latin America in Santiago de Chile.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Part I Technology, Learning, and Decision-Making**

## **Chapter 2 Orientational Knowledge in the Adoption and Use of Robots in Care Services**

**Helinä Melkas, Satu Pekkarinen, and Lea Hennala**

Elderly care faces a gigantic shift in technology. Health and welfare technologies are expected to help people live independent and healthy lives with retained integrity (Kapadia, Ariani, Li, & Ray, 2015). They are also expected to contribute to the effectiveness and efficiency of elderly care and to meeting individual needs (Malanowski, 2008). The demographic challenge of the ageing population also means that fewer people are working. Health and welfare technology could play a significant role in supporting care professionals. The elderly care sector is undergoing structural transformation, and the introduction of health and welfare technology has clear potential to contribute to its development. In many countries, scenarios of elderly care with severe staff shortages and cutbacks are already a reality. One way to drive improvements is to focus on the intersection of the two phenomena—the transformation caused by the shift in technology and the demographic challenge—and the potential they create (Niemelä et al., 2021). Robots have gained more cognitive functions and improved safety, which makes it possible to use them to provide new types of services, including in elderly care (Holland et al., 2021; Preum et al., 2021). The European Union has also advanced the use of robots in providing care services. Yet despite care robots' potential to advance health and welfare, the centrality of ethical, social, and legal issues hampers application (e.g., Seibt, Hakli, & Nørskov, 2014; Melkas, Hennala, Pekkarinen, & Kyrki, 2020b), requiring changes at the individual, service, and societal levels and their interfaces.

A lack of knowledge is a major challenge in the use of robots in care (e.g., Johansson-Pajala et al., 2020). Johansson-Pajala et al. (2020) investigated various stakeholders (older adults, relatives, professional caregivers, and care service managers) and found that many lack knowledge of general matters, such as what a care robot is, what it can do, and what is available on the market. Detailed information is also needed concerning care robots' benefits for individuals' specific needs (Johansson-Pajala et al., 2020). Those introducing, using, and assessing care robots must therefore give priority to a nuanced understanding of knowledge. In this chapter, we present a compilation of our recent micro-, meso-, and macro-level studies on care robots and elaborate on the relation between robot technology and knowledge, proposing a focus on *orientation to care robot use* as a continuous co-creative process of introduction to technology use and its familiarization, including the learning of multi-faceted knowledge and skills for its effective use (see also Johansson-Pajala et al., 2020; Melkas et al., 2020a). This perspective can be regarded as complementing existing technology acceptance and diffusion models [e.g., Technology Acceptance Model (TAM): Davis, 1986, 1989; Theory of Reasoned Action (TRA): Fishbein & Ajzen, 1975; Diffusion of Innovations Theory (DIT): Rogers, 2003; Unified Theory of Acceptance and Use of Technology (UTAUT): Venkatesh, Morris, Davis, & Davis, 2003], whose creators have focused on different stages of technology adoption, familiarity with technology, use intention, adoption, and post-adoption (Khaksar, Khosla, Singaraju, & Slade, 2021). We also focus on the *process of how* the adoption, acceptance, and meaningful use of care robots can be facilitated with the help of knowledge.

H. Melkas (\*) · S. Pekkarinen · L. Hennala
School of Engineering Science, Lappeenranta-Lahti University of Technology, Lahti, Finland
e-mail: Helina.Melkas@lut.fi

© The Author(s) 2024
J. Glückler, R. Panitz (eds.), *Knowledge and Digital Technology*, Knowledge and Space 19, https://doi.org/10.1007/978-3-031-39101-9\_2

We base our approach on the view that new ways should be created for increasing knowledge related to care robot use, taking into account the needs of older customers, their relatives, caregivers, and care service organizations. Such efforts must not overlook societal-level actors, including business and industry, public administration and the non-profit sector, the media, and other stakeholders in the related innovation ecosystem (Pekkarinen, Tuisku, Hennala, & Melkas, 2019). We focus our research synopsis on the micro, meso, and macro levels related to care robot use, aiming also to unveil a more systemic view of the knowledge related to it. On the basis of multi-level robot studies and a long background in welfare technology research, we propose shifting the focus from mere training—the provision of information—to a more comprehensive understanding of processes and actions towards knowledge building in this area. The transformation caused by the shift in technology requires such novel understanding as a prerequisite for reaping the benefits of care robot use.

#### **Background**

Researchers have defined care robots as partly or fully autonomous machines that perform care-related activities for people with physical and/or mental disabilities related to age and/or health restrictions (Goeldner, Herstatt, & Tietze, 2015). These robots may simplify the daily activities of older adults and/or people with disabilities or improve their quality of life by enhancing their autonomy (Herstatt, Kohlbacher, & Bauer, 2011) and providing protection (Goeldner et al., 2015). Wu, Fassert, and Rigaud (2012) categorized care robots as monitoring robots (helping to observe health behaviours), assistive robots (offering support for older adults and their caregivers in daily tasks), and socially assistive robots (providing companionship). Care robots may assist assistant nurses, for example, in their daily tasks (Melkas et al., 2020b). Cresswell, Cunningham-Burley, and Sheikh (2018) presented another categorization of care robots, including service robots (e.g., stock control, cleaning, delivery, sterilization), surgical robots, telepresence robots (e.g., screens on wheels), companion robots, cognitive therapy robots, robotic limbs and exoskeletons, and humanoids. Niemelä et al. (2021) categorized robotic applications and services according to their use contexts and purposes.

Researchers express doubts about the technological readiness of care robots and note the lack of concrete usage scenarios for everyday nursing practice (Maibaum, Bischof, Hergesell, & Lipp, 2021). Several challenges concerning organizational culture, practices, and structures lead to problems with integrating care robots (Arentshorst & Peine, 2018; see also Pekkarinen et al., 2020) when efforts are made to expand their use. In general, the acceptance and impacts of digital technologies on customers and personnel in elderly care affect the possibilities of embedding technological innovations into care (e.g., Goeldner et al., 2015; Melkas et al., 2020b). The way in which older customers are involved in the emerging area of care robot use may be essential for their wellbeing and their opportunities to learn technology and participate in society throughout the different stages of later life. Despite the recognition that technical aids could promote, sustain, and improve the wellbeing of older people (e.g., Herstatt et al., 2011; Kanoh et al., 2011), usable indicators for good solutions are lacking (Taipale, 2014).

Researchers have previously shown that implementers could have eliminated or relieved most of the negative effects of welfare technology use by means of good orientation, based on foresight information and assessment (Raappana, Rauma, & Melkas, 2007). Users lacking an appropriate level of skills and knowledge struggle with feelings of insufficiency and incapacity, easily leading to lowered motivation and distress. These feelings may diminish the intended impacts on wellbeing. The most significant factor related to the introduction of technology that motivates an individual is the benefit they get from its use. The different impacts of technology use are often indirect and difficult to identify (Melkas et al., 2020b). Each person's skill level differs, and a technical device in care is not born and used in a vacuum: Behind the technology there stands a user with their own values, living (or working) environment, and related service activities (Melkas, 2011). Technologies are still typically brought into care services as separate "islands," and the systemic view is missing (Pekkarinen et al., 2020).

Regarding the relationship between knowledge and technology, Jones III (2017) conducted a systematic review on knowledge sharing and technological innovation management and found that three factors are paramount to knowledge sharing: (a) trust, (b) technological training, and (c) good communication. Managers should focus on implementing practices with which they can emphasize these factors in their teams and/or organizations. Teo, Wang, Wei, Sia, and Lee (2006, p. 276) found that for technology assimilation, organizational learning is important in leveraging technological advantages and developing "learning capacities to increase a team's ability to understand and leverage new technologies." Training is important for understanding technologies and for sharing knowledge and insights about a technology within a team or organization. Seufert, Guggemos, and Sailer (2021) specified the concept of technology-related knowledge, skills, and attitudes (KSA). Although they focused on teachers, these points are not likely to depend on the profession but are more generally connected to the relationship between knowledge and technology at the micro (and perhaps also the meso) level. The creators of the will, skill, and tool model also imply that attitudes are predictors of the actual use of technology (Knezek & Christensen, 2016).

Researchers have devoted far less attention to the relationship between knowledge and technology at the societal level (understood in this chapter as the macro level), especially from a human-oriented perspective. Considering the specific type of technology—robots—the terms "robot knowledge" and "robotics knowledge," for example, have gained quite technical interpretations. Suto and Sakamoto (2014) defined "robot literacy" as the ability to have appropriate relationships with intelligent robots—a kind of media literacy, because robots can transmit the designers' intentions to the users. Our research approach is broader, including what could be called "societal robot literacy" (societal awareness raising; Pekkarinen et al., 2020).

In this research synopsis, we focus on the relationship between knowledge and robot technology at the micro, meso, and macro levels from the perspective of end users (older persons living in their homes or in assisted living settings, and their relatives), care service personnel and organizations, and society. As end users, older people using technology are often viewed stereotypically or represented by assumptions or static identities without cultural and historical constructions (Östlund, Olander, Jonsson, & Frennert, 2015). In this narrow portrayal, old age is strongly related to illness, frailty, lost competences, and costly care. When such images underlie innovation processes, the resulting technology design—for example, of care robots—may implicitly or explicitly position older users only as frail, ill, or in need of care (Neven, 2010), reinforcing the stereotypical and homogeneous sociocultural imagery of older people that is translated into key design decisions (Oudshoorn, Neven, & Stienstra, 2016). When designers incorporate user diversity at all, they have most often focused only on age and gender differences (Flandorfer, 2012).

Moreover, an imbalance often exists between perceptions of older people's technology needs and knowledge about their actual needs. According to Östlund et al. (2015), the role of older people in digital agendas may simply be to legitimize development for fictive users rather than real ones. Old age is seen as a homogeneous stage in life, yet it covers decades and includes several phases. Society needs a paradigm shift and proactive technology that meets the real needs and demands of actual older people today (see Östlund et al., 2015; Gustafsson, 2015). The structure of elderly care also diverges from some other service processes: Not only is the client involved, but informal caregivers, such as relatives, often provide an essential part of the care (Johansson-Pajala et al., 2020).

From the point of view of work life, workers with low technology skills in particular face challenges in the new social and physical environment partly characterized by robots. They have a central role to play in listening to older customers' needs, guiding them, and promoting their wellbeing (Tuisku et al., 2022). Technology implementation requires changes in work practices and collaboration among organizations, as well as in the knowledge and skill levels of personnel. Because organizational decision-makers do not commonly consider technology and care services as connected, the introduction of technologies such as care robots may lead to fatigue, loss of work motivation, additional costs, unwillingness to use the technology, and a decrease of well-being at work, sometimes even resulting in the premature loss of the experience and professional skills of older workers (e.g., Venkatesh & Davis, 2000; Brougham & Haar, 2018). Yet professional caregivers have highly valued the introduction of technology into elderly care. According to Gustafsson (2015), in dementia care—which is considered "low-tech" care—professional caregivers consider it highly valuable for older people to be part of technology development. Caregivers suggest that not excluding older people with dementia but offering them technology support for increased wellbeing is an important ethical aspect.

Importantly, we consider knowledge about care robot technology essential for decision-makers and a variety of other societal stakeholders. New technologies, such as care robots, contribute to broader societal changes, involving constant "negotiations" with user preferences and thinking models, policies, infrastructures, markets, and science (Pekkarinen & Melkas, 2019; Akrich, Callon, Latour, & Monaghan, 2002; Geels, 2004). This makes it important to innovate in structures, mindsets, and practices, involving stakeholders from different sectors, domains, and levels (Loorbach, van Bakel, Whiteman, & Rotmans, 2010).

We thus propose focusing on knowledge as a key issue for care robot use. We wish to contribute to finding appropriate and effective forms of increasing knowledge, and to providing practical, user-centered learning to promote inclusive technology implementation and use. Although the role of knowledge in different contexts becomes more important with increasing digitalization, researchers of knowledge and technology use have often worked quite generally, or only at one or (at most) two of the micro, meso, and macro levels. They seem to have largely overlooked practical knowledge-building efforts in care robot-related research, even though earlier researchers identified various obstacles to the acceptance of care robots and shortcomings in their use. Sharkey and Sharkey (2012), for example, noted that the use of robots in elderly care brings various ethical problems: the loss of human contact; the feeling of objectification; a loss of control, privacy, and liberty; deception and infantilization; and the question of whether older people should be allowed to control the robot. Customers are largely on their own, especially if they "age in place" and have not moved into institutional living. Their relatives may also feel ignorant and helpless in the face of the jungle of various technologies, wondering what is suitable and for what purposes (see Johansson-Pajala et al., 2020). The novelty of care robots exacerbates these problems. Producers of appliances and systems often organize initial training for care organizations, but such training is provided by trainers who do not work in the care sector, and the specific needs of an individual care organization—let alone an individual employee—are rarely taken into account (Melkas, 2013).

The variety of concepts related in one way or another to knowledge and technology may obscure the essentials. The concepts of acceptance, adoption, assimilation, or introduction, familiarization, domestication, and embedding may be well known, but the existence of multiple terms may blur the overall picture. By contrast, *training* is very commonly used. Questions remain: How much and what kind of training is needed, and for whom? However, we focus this research synopsis on a broader matter—the advancement of an increasingly systemic and multi-level perspective on knowledge building—with which we expand the relatively narrow focus on training towards a more comprehensive and interactive focus on *process* and *action*.

#### **Methods and Materials**

In this chapter, we present a synopsis of our recent research on care robot use published since 2019, referring to individual research contributions and findings where appropriate. We carried out this research as part of the ROSE and ORIENT projects, which we implemented together with colleagues from other Finnish universities, Sweden, and Germany. ORIENT ("Use of Care Robots in Welfare Services: New Models for Effective Orientation," 2018–2020) was an international research project within the Joint Programming Initiative (JPI) "More Years, Better Lives," centered on the use of care robots in welfare services for older adults. Within ORIENT, we studied how robots should be introduced, how their use should be planned, what kind of support and information the various stakeholders need, and how these needs can be taken care of. We also linked our research to the framework of sociotechnical transition, whereby new technologies are seen as contributing to broader societal changes. ROSE ("Robots and the Future of Welfare Services") was a 6-year multidisciplinary research project funded by the Strategic Research Council (SRC) established within the Academy of Finland. The project's objective was to study the current and expected technical opportunities and applications of robotics in welfare services, particularly in care services for older people. We conducted our research at three levels: individual (micro), organizational (meso), and societal (macro).

In the field studies, surveys, and interview studies that we have carried out in recent years, we have focused on understanding the needs, perceptions, and experiences of end users—older adults, their relatives, and care professionals alike—regarding robots in care, and the various challenges faced when taking robots into use or raising awareness about their potential. In other studies, we have focused on understanding the organizational and societal levels. Several of our studies were connected to the long-term implementation of robots in authentic care or related environments. The findings from these studies are thus often based on the participants' first-hand experience of robots in their everyday lives and work in the context of care for older people. Theoretically, we draw on inputs from innovation research, inter alia.

#### **Knowledge-Related Needs at Different Levels**

#### *Micro- and Meso-level Studies*

#### **Implementation of a humanoid robot in public elderly care services**

The Zora robot is a 57-centimetre-tall humanoid care robot (see Fig. 2.1). It can be used for rehabilitation and recreational assistance with exercise; it can also play music, perform dances, tell stories, and play interactive memory and guessing games. Softbank Robotics produces this Nao-type robot with software developed for application in the healthcare field.<sup>1</sup> With regard to elderly care, Huisman and Kort (2019) and Kort and Huisman (2017) have concluded from studies conducted in long-term facilities that the Zora robot can positively influence both clients and staff. They found potential for offering older clients alternative means of pleasure, entertainment, and rehabilitation, but long-term care facilities are still exploring the most suitable target groups for Zora use (Kort & Huisman, 2017). Researchers studying acceptance of and attitudes towards care robots have often used only pictures or audio-video material to, for example, elicit respondents' opinions of care robots (van Aerschot & Parviainen, 2020). When actual care robots are used

<sup>1</sup>For more detailed information, see www.zorarobotics.be

in research settings, researchers have mainly conducted short-term trials and pilot projects (Andtfolk, Nyholm, Eide, & Fagerström, 2021). We conducted longitudinal multi-perspective research on the implementation of Zora in 2015–2019. Our research consisted of a field study of the implementation phase and follow-up interviews after three years of use of the first Zora utilized for public elderly care services in Finland.<sup>2</sup>

From our field study results in the implementation phase (Melkas et al., 2020b), we concluded that the robot's presence stimulated the clients to exercise and interact. The care workers perceived the clients' well-being as both a motivation to learn how to use robots and a justification for negative views. The robot's use was associated with multiple impacts with positive, negative, and neutral dimensions. These included impacts on interaction, physical activity, emotional and sensory experiences, self-esteem and dignity, and service received for clients; and impacts on the work atmosphere, meaningfulness of work content, workload, professional development, competences, and experience of work ethics for care personnel. Impacts on care personnel were related, for example, to the need for orientation, problems with time usage, and overall attitudes towards the novelty and renewal of care services. The caregivers highlighted the importance of knowing the clients and their needs well in advance when planning to use the robot. They emphasized that ample time for training and orientation for all personnel was needed. Orientation (referring to training and learning) related to care robots should not only explain technical issues but also cover issues related to time usage and task division. The managers also recognized the need for orientation, a major issue that requires emphasis and skillful handling: "I asked the importer to give training when I saw the fear, distress, and diffidence about the robot" (an instructor).

The use of the Zora robot affected the integrity of the entire workplace community in our study, as there were some tensions between robot users and non-users, and between "puttering about robot use" (as others perceived it) and "real care work." Many of the identified impacts were related to how the robot fit into the service processes. Workflow integration was challenging. Thus, although Zora has the potential to be part of care services and multifaceted rehabilitative functions, the need for careful systemic planning became clear. The robot's use must be well planned, with an understanding that the robot's usefulness varies and may increase over time. Realizing a robot's full potential may depend on providing staff with a proper orientation, usage time, and clear motives for use. Organizational leadership commitment may increase benefits for the clients and personnel in the establishment phase (e.g., from the viewpoint of meaningfulness of work). However, such

<sup>2</sup>The data on the implementation phase consisted of semi-participatory observation (27 sessions), focus group interviews of care workers, clients and social and healthcare students, and individual interviews of the management (49 interviews), as well as comments in the public media from January to April 2016. We further conducted seven follow-up interviews (care personnel from three units and managers) in the spring of 2019. We analyzed the data using the qualitative human impact assessment approach (Melkas, 2011) to identify the impacts of care-robot implementation on users, that is, care personnel and older clients.

benefits may remain negligible if the use is not well planned and led. An inadequate understanding of the purpose and meaningful tasks of the robot may lead to unrealistic expectations and unmet needs (Melkas et al., 2020b).

By thus studying the implementation phase, we unearthed the tricky relationship between knowledge and robot technology at the micro and meso levels. The impacts on care personnel were closely related, in multiple ways, to knowledge-building needs, such as knowing about the device, its purpose, and its meaningful use for different kinds of clients; the workplace community's knowledge building about personnel's needs, time usage, and task divisions; and addressing possible fears. We also gained insights into knowledge in relation to clients. Clients should not be misled; the role of ethics is of key importance; and it is essential for the care personnel to explain to the clients what the robot is doing throughout the sessions, how clients can address and interact with it, and the role of the robot operator. As one caregiver said: "Elderly clients are grown-ups, even if they suffer from memory diseases. They are not stupid. The operator of the robot should tell them what is done and why."

Moreover, we studied the implementation phase using media analysis. Tuisku, Pekkarinen, Hennala, and Melkas (2019) examined the publicity surrounding the implementation of Zora. The aim was to discover opinions concerning the use of robots in elderly care as well as the arguments and justifications behind them. As the first Zora implementation in Finnish public elderly care services, the robot received much publicity, both regionally and nationally. From comments collected from online and print media, analyzed by means of interpretative content analysis, we learned that public opinion was mainly negative, but that the commentators apparently had little information about the robot and its tasks. There is clearly a need for more knowledge at the societal level for a better-informed discussion of how robots can be used in elderly care. Knowledge is also needed on how to involve the general public in this discussion in a constructive way.

Through our study on the long-term use of Zora (Pekkarinen, Hennala, & Tuisku, forthcoming), we showed that even though the care workers felt that the robot was a nice robotic "messenger" that brought new and interesting challenges to their work and recreation for clients, the robot-assisted service was not truly embedded in the daily services of the care units. This was due to factors such as changes in organizational structures, personnel, and tasks, which led to shortcomings in the provision of information and in the processes related to long-term robot use.

#### **Exoskeleton trials**

Wearable exoskeletons are increasingly being used in physically demanding jobs to support good ergonomics and augment muscular strength. Little is known, however, about nurses' willingness and ability to use them. The Laevo exoskeleton (see Fig. 2.2) is a wearable back-support vest that, according to the manufacturer, alleviates lower back strain by 40–50%. The exoskeleton trials reported by Turja et al. (2020) were conducted during 2019 and 2020. Despite the low-tech nature of the equipment, researchers need trials to investigate the opportunities that wearable technology provides for making care work physically less demanding. We tested Laevo exoskeletons in authentic care homes and home care environments in Finland. In the qualitative analysis, which we have summarized here, we investigated the social environment's impact on the intention to use exoskeletons.

**Fig. 2.2** Laevo. Source: Photo by Päivi Tommola. Reprinted with permission

Care workers (n = 8) used the exoskeleton individually for several days, up to one week. The participants were interviewed before and after the trial period, and they kept a diary on their use of the exoskeleton. In the pre-interviews, most nurses expected exoskeleton use to arouse interest and curiosity among patients and their relatives. Some thought the exoskeleton could cause aversion, especially if the nurses themselves expressed negative attitudes towards it or were unable to answer questions about it. However, some suspected that the exoskeleton would not even draw the patients' attention, especially among those who suffered from memory disorders. These predictions proved to be quite accurate. The nurses reported that some patients assigned fairly negative attributions to the exoskeleton, such as calling it "a mess." This may be because the nurses' appearance while wearing the exoskeleton came across as clumsy and awkward. In the post-interviews, the nurses revealed that the patients showed compassion towards those who "had to" use the exoskeleton.

In the pre-interviews, the nurses assumed that their colleagues would have quite mixed views about the exoskeletons. They expected that some colleagues would have a very negative opinion, merely because they did not know enough about the exoskeleton's usefulness. Some nurses anticipated that the trial period might cause colleagues to either ridicule the device or express interest in trying it out. Although the post-interviews supported these presumptions, the nurses also reported that their colleagues questioned the exoskeleton's weight and pleasantness. The colleagues presumed that the discomfort would decrease the intention to use the exoskeleton, but the nurses themselves reported being motivated to use it primarily because it would improve their ergonomics, and because this promise of positive health benefits would outweigh any possible drawbacks. We concluded that besides the functional characteristics of the device, many aspects of human-centered care work have to be taken into consideration when implementing exoskeletons in the care context. This indicates that new technology must be compatible with the ethical and social norms of care work (Turja et al., 2020).

As a result of the trials, the nurses did not believe that their colleagues or patients would strongly oppose the use of the exoskeletons. They also thought that managers would be supportive. It is important to design new technologies and work methods together with professionals, utilizing their knowledge. Specific characteristics of geriatric care work either enhance or hinder the implementation of this new technology, so the specific professional and cultural contexts of exoskeleton acceptance need to be emphasized. For example, ease of use has typically played a strong role in predicting the intention to use technology (Heerink, Kröse, Evers, & Wielinga, 2010), but this did not appear to be a prerequisite for accepting exoskeletons among Finnish nurses.

To summarize, the micro- and meso-level field studies showed, from the point of view of knowledge-related needs and knowledge building, that training and learning related to care robots must include more than an explanation of technical issues. They must also cover a wide variety of other issues, such as time usage and task divisions, with managerial involvement. The provision of information, and thus knowledge building, is needed to enable the integration of robot-assisted services into the daily services of care units. The benefits of use should also be clarified with regard to the characteristics of human-centered care work. Care personnel, in turn, play a role in building knowledge among their clients.

#### **The role of assistant nurses in care robot use**

Assistant nurses are an important part of care personnel. They support basic care and thus work at the grassroots level, closest to older adults with care needs. They form the largest professional group in Nordic social and health care (Ailasmaa, 2015). Yet researchers of technology use often overlook them (Glomsås, Knutsen, Fossum, & Halvorsen, 2020). According to our studies, understanding their perspectives and needs for knowledge seems essential for the implementation of care robots (Melkas et al., 2020b). With the increased use of technology, assistant nurses' tasks are also likely to include introducing new technology to older adults and supporting them in its use (Øyen, Sunde, Solheim, Moricz, & Ytrehus, 2018).

To understand the role of assistant nurses (and as part of their work communities) in robot technology use, and to contribute to future strategies for orientation to care robot use, Tuisku et al. (2022) examined assistant nurses' views of and need for receiving and giving orientation to care robot use in three European countries—Finland, Germany, and Sweden—using an online questionnaire developed on the basis of earlier research (Johansson-Pajala et al., 2020). A total of 302 assistant nurses responded to the survey (Finland n = 117; Germany n = 73; Sweden n = 112).

According to the results, only 11.3% of the assistant nurses had given orientation about care robot use to older adults or colleagues, but over 50% were willing to do so. Those with experience in using care robots should take part in giving orientation. The most common source of orientation to care robot use was traditional media, yet most nurses preferred to be introduced to care robot use through face-to-face interaction. In these introductions, they considered the most important pieces of information to be the benefits of a care robot (e.g., how it can assist caregivers). While respecting each country's different welfare system, orientation to care robot use should be seen as part of care management and as an issue that may affect future elderly care.
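
To give a sense of the statistical precision behind such survey percentages, the following minimal sketch computes a Wilson confidence interval for the share of orientation givers (11.3% of n = 302). The calculation is purely illustrative and is not part of the original analysis.

```python
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% by default)."""
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# 11.3% of the 302 respondents had given orientation to care robot use.
low, high = wilson_ci(0.113, 302)
print(f"95% CI for the proportion: {low:.3f} to {high:.3f}")
```

With n = 302, the estimate is fairly precise (roughly 8–15%), which supports treating the reported shares as more than anecdotal.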

Assistant nurses are both receivers and providers of orientation to care robot use, and thus act as "mediators" of the related knowledge. In this sense, they are indeed a critical group, as orientation to care robot use essentially draws on the mixture of practical and professional knowledge that assistant nurses possess. Management should allow assistant nurses to get to know care robots by offering information and by involving them in managerial discussion on how care robots can improve their work and facilitate older adults' meaningful and prolonged independent lives. Orientation to care robot use should be seen as part of care management and as an issue that may affect the whole organization (Tuisku et al., 2022).

As regards the relationship between robot technology and knowledge, we learned from surveying assistant nurses that it is important to understand them as both receivers and providers of orientation to care robot use, acting as "mediators" of the related knowledge. Tailored orientation methods are needed to respond to the knowledge needs of assistant nurses, and orientation activities must form part of care management.

#### *Multi-level Studies*

#### **Macro-level stakeholders' views of the care robotics innovation ecosystem**

Societal actors and researchers still rarely discuss the societal and systemic levels of care robot use, despite efforts to advance the use of robots in welfare services and various countries' initiatives to produce robotization strategies for those services. A wider and deeper understanding of the societal and systemic levels is missing, and ecosystem concepts could provide some assistance. Ecosystems are networks that gather complementary resources to co-create value (Moore, 1996) and involve cooperation, competition, and interdependence (Adner & Kapoor, 2010). Some scholars still regard the concept of the innovation ecosystem (Adner & Kapoor, 2010) as synonymous with the business ecosystem, whereas others differentiate the two (de Vasconcelos Gomes, Figueiredo Facin, Salerno, & Ikenami, 2018). De Vasconcelos Gomes et al. (2018) identified a dividing line: The business ecosystem relates mainly to value capture, whereas the innovation ecosystem relates mainly to value creation.

We conducted a study in which we focused on the dynamics of the emerging care robotics innovation ecosystem in Finnish welfare services (Pekkarinen et al., 2019; Tuisku, Pekkarinen, Hennala, & Melkas, 2017). As innovation ecosystems have both an evolutionary nature and aspects of purposeful design, we examined the relevant actors, their roles, the accelerators, and the barriers by conducting a survey among relevant stakeholders in the innovation ecosystem. The online survey was completed by a range of Finnish stakeholders (n = 250), including service actors (n = 148) and research and development actors (n = 102). We identified the care robotics innovation ecosystem as involving, on the one hand, service actors who are responsible for acquiring robots in welfare services (such as municipalities and hospital districts) and, on the other hand, research and development actors (decision-makers, development organizations, research institutes, and robot-related firms), whose tasks relate to the development of robots from different perspectives. The service actors have more hands-on expertise in welfare services than the R&D actors. We prepared for the survey by carefully identifying the stakeholders in this emerging domain in Finland, then analyzed the two groups' responses using a pairwise t-test.
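
As an illustration of the kind of group comparison involved, the sketch below contrasts the two stakeholder groups' ratings on a single survey item with a t-test. The data, the item, the use of an independent-samples (Welch) test, and the scipy tooling are all illustrative assumptions, not the original materials or analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# Hypothetical 5-point Likert ratings of one survey item
# (e.g., "collaboration regarding robot use is common in my field").
service_actors = rng.integers(1, 6, size=148)  # n = 148
rd_actors = rng.integers(2, 6, size=102)       # n = 102, shifted upwards

t_stat, p_value = stats.ttest_ind(service_actors, rd_actors, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In a real analysis, each survey item would be compared in this fashion, with the usual caveats about treating Likert responses as interval data.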

According to our results (Pekkarinen et al., 2019), the Finnish care robotics innovation ecosystem is still largely in its nascent stage. Essential stakeholders are missing or involved in many additional activities. Among the variety of stakeholders needed, the most important groups that should be involved are private persons who use robots in their homes, customers of services that utilize robots, and professionals who use robots. This concerns both the discussion and the product and service development related to robots. The R&D actors, in particular, emphasized that private persons who use robots in their homes and customers of services that utilize robots should be involved in public discussion and development activities. The respondents also indicated the important role of researchers in public discussion, as they are most likely to provide valid information based on empirical knowledge. The R&D actors seemed to think that more stakeholders needed to take part in the discussion than the service actors did. Overall, collaboration regarding the use of robots in welfare services remains rare. The R&D actors collaborated significantly more than the service actors. Service actors need to play a stronger role in the ecosystem.

Pilot studies with care robots have been loosely connected to the real aims of care (Pekkarinen et al., 2019). Robots should be integrated into other care technologies and into existing processes and information systems in care. We found the dynamics in the care robotics innovation ecosystem to be largely based on social and cultural issues. According to our results, three factors had the greatest effect on slowing down and hindering the introduction of robots: the care culture, resistance to change, and fear of robots. We found that Finland's piloting culture accelerates the introduction of robots and ecosystem growth in society, but that hindering factors such as fears and resistance have an impact. These hindering factors are largely attitudinal and are based on existing path dependencies rather than on technological limitations. Experimental projects in real-life contexts are seen as critical, as they bring together actors from various environments in shared networking and learning activities (Bugge, Coenen, Marques, & Morgan, 2017). However, as noted in the context of the Zora study, a shortcoming of care robot research has been its focus on short-term trials and pilot projects (Andtfolk, Nyholm, Eide, & Fagerström, 2021); longitudinal multi-perspective research has been lacking. Thus, a certain tension seems to exist in the culture of piloting (for a discussion, see the sub-section on impact assessment).

Defining ecosystem boundaries is generally challenging, and the ecosystem's and individual members' successes may even conflict. The creation of an "ecosystem mindset" is becoming important (see also Niemelä et al., 2021). Especially from a future-oriented perspective, ecosystem thinking may be developed with the help of education. In addition to increasing "hard" technical competences, education should cover issues related to the practical use of robots as well as the work-life changes brought about by robot use. Those participating in the stakeholder survey highlighted: new abilities to process and analyze data; knowledge about data and cyber security, automation, and industrial management; understanding of the social dimensions of robot technology, the operational logic and principles of robots, and usability; skills in the design of user interfaces and robotic devices; and knowledge about ethical issues and risks related to robotics. Educational institutions should build multidisciplinary programs that combine technical and welfare-related issues. Students of social and health care should gain certain technical competences, whereas those studying technology should gain competences in psychology and behavioral sciences. The survey respondents emphasized holistic understanding. Clearly, education can advance multi-sector and multi-professional skills and knowledge, as well as openness (Pekkarinen et al., 2019; Tuisku et al., 2017), and these competences are needed for future working life.

To summarize, regarding the relationship between robot technology and knowledge, the stakeholder survey showed that in the innovation ecosystem, users' knowledge—meaning here that of both private persons and care professionals—should be more visible in joint knowledge building. An ecosystem mindset is also related to joint knowledge building. Ecosystem knowledge can be advanced through education. The knowledge and competence needs that should be addressed in society and in workplaces are broad and diverse.

#### **Multi-level perspectives on care robot use**

#### Care robots in Finland: Overall findings

To unearth a multifaceted picture of the situation in Finland (for international studies, see Hoppe et al., 2020; Johansson-Pajala et al., 2020; Pekkarinen et al., 2020), we conducted interviews at the micro, meso, and macro levels. At the micro level, 18 individuals participated in focus group interviews (older people, their relatives, professional caregivers, and care managers). At the meso (organizational and community) level, 12 individuals participated in semi-structured interviews (representatives of companies, interest organizations or associations of social and healthcare professionals, interest organizations or associations of end users/citizens (older people), organizers or providers of public social and healthcare services, and educational institutions educating professionals for the social and healthcare or welfare technology fields). At the macro (societal) level, 11 individuals participated in semi-structured interviews (representatives of political decision-makers, research institutes, insurance organizations, funding organizations, and the media).

Analyzing our results, we learned that "the door is open" for robot use in Finnish care for older adults. Various pilots have offered glimpses of this, but there is an obvious lack of knowledge about the benefits of robot use and a lack of understanding of robots' tasks in services, their integration into clients' services, collaboration between various stakeholders, and competence in management and procurement. The interviewees emphasized the problem of "project-natured" pilots that lead to no permanent activities. On the one hand, inadequate, even skewed, information exists about the real opportunities of robot use in care for older adults; on the other hand, people have exaggerated expectations for, and fears of, the use of robots.

The attitudes of professional caregivers and clients towards robot technology varied in the study. Resistance was caused by the way in which robot use is marketed; marketing focuses only on economic concepts and underscores savings instead of quality of care. At all levels, interviewees strongly emphasized two issues: lack of knowledge and competence, and economic factors. At the micro level, they stressed several issues.


The meso-level interviewees emphasized the following challenges: the one-off nature of pilots; the embedding of robots into the structure of the care system and vocational education; management and its support related, for example, to resistance to change; and a lack of shared national-level practices and guidelines. The macro-level interviewees highlighted the following challenges: uncertainty about the roles of different stakeholders, the lack of a "knowledge concentration," and the inadequacy of steering and funding mechanisms. Some interview quotations follow:

When robotics are discussed, I think it [the term] can be misunderstood badly … When the concepts become clearer, and what each of them means, there won't, perhaps, be this confusion, suspicion, or prejudice towards it. (Interest organization for end users)

I see that a positive vision essentially means that different stakeholders—and, you could even say, the general public—understand what robotics is and what it is not; what it is used for and what it is not used for … A negative vision is probably that this technology is brought to the field without anyone except technology developers really knowing what the technology is and why, or for what purpose, it is brought into use. (Research institute)

With these multi-level interviews, we confirmed the importance of integrating care robot-related issues into the education of future care professionals early in their studies. Basic education at all levels of social and health care should include education on care robotics. According to the interviewees, care robotics is not a separate issue to be discussed in some special courses—as it is nowadays—but must be integrated into everything that is taught:

If the Swedish language is taught, then the relevant concepts in Swedish are taught, and if care work is taught, or care for some particular illnesses, then the opportunities [of robotics] there or in that illness should be taught. (Caregivers' interest organization)

The interviewees brought up good examples of educational pilots in vocational education—cross-disciplinary programs—but they noted that new occupations and occupational groups will emerge, which increases the need to understand each other's work and the big picture. As technology may become outdated, those designing basic education in social and health care should not settle for teaching the use of individual devices but should create capabilities for seeing and developing robot use as a wider topic.

#### Knowledge brokerage

Knowledge brokerage—the work of knowledge brokers, actors who "translate" diverse stakeholders' different "languages" for the common good—requires attention in robot use generally, and particularly in the development of care robotics ecosystems (Parjanen, Hennala, Pekkarinen, & Melkas, 2021; Pekkarinen et al., 2020). According to Burt (2004), brokerage (or brokering) can occur by making people on both sides of a structural hole aware of the other group's interests and difficulties, transferring best practices, drawing analogies between groups ostensibly irrelevant to one another, and synthesizing knowledge interests. We analyzed the multi-level interviews from this perspective to identify macro-, meso-, and micro-level brokerage needs, functions, and roles in care robotics innovation ecosystems and networks, as well as the kinds of knowledge that should be brokered at these different levels.

According to the results (Parjanen et al., 2021), emerging care robotics ecosystems and networks need brokerage functions to create operational conditions, bring disparate actors together, manage innovation processes, create learning possibilities, and share best practices. However, this brokerage must vary by level, meaning that the functions and roles of brokers and of brokered knowledge may be emphasized differently. At the macro level, actors need system-level knowledge; at the meso level, they require knowledge related to innovation process management as well as user knowledge; and at the micro level, experiential and tacit knowledge takes precedence. Interest organizations of end users, for example, have an important role to play—they diffuse knowledge, for instance from the employees of the social and healthcare sectors or the clients of care homes to the decision-making levels. The interviewees stated that it is essential for user knowledge to be collected by a neutral actor to better reveal the impacts of care robots. One broker or brokering organization typically has several roles, such as policy executor, creative actor, crosser of distances, shaper of organizations, and sniffer of the future (Parjanen, Melkas, & Uotila, 2011; Parjanen et al., 2021).

#### Socio-technical transition

Along with the ecosystem perspective, we have used the perspective of socio-technical transition in our research to focus on the societal level. In Pekkarinen et al. (2020), we tackled the socio-technical transition of elderly care—a multi-level change involving a reconfiguration of the social and technological elements of the system. Socio-technical transitions differ from technological transitions in that they include changes in user practices and institutional structures (e.g., regulatory and cultural) in addition to the emergence of new technologies (Markard, Raven, & Truffer, 2012). This is essential to consider, as a sector such as elderly care is traditionally seen as being based on human work and values. We examined the transition in the elderly care system and the conditions for embedding robots in welfare services and society in three European countries—Germany, Sweden, and Finland. We studied the ongoing change in elderly care services and the introduction of robotics in the field in terms of the multi-level perspective on transitions (e.g., Geels, 2002, 2004, 2005; Geels & Schot, 2007), a central framework facilitating the study of socio-technical transitions. With this approach, we highlighted the interdependence and mutual adjustments between technological, social, political, and cultural dimensions (Smith, Voss, & Grin, 2010; Bugge et al., 2017).

The interviewees represented the regime level in the transition framework; they acted as intermediaries at the interface between, for instance, end users and decision-makers, but also between niche-level actors and landscape-level changes. In our qualitative study, we focused on the current situation in the use of robots in elderly care as well as on the elements advancing and hindering the integration of robots into society and elderly care practices. According to the results (Pekkarinen et al., 2020), there is a shift towards using robots in care, but remarkable inertia exists in both technological development and socio-institutional adaptation. The advancing and hindering elements in the transition are both technical and social and increasingly interrelated, which those creating management and policy measures must consider to facilitate successful future transition pathways. The change in attitudes and the embedding of robots into society are promoted, for instance, by raising relevant knowledge about robots at different levels.

We concluded (Pekkarinen et al., 2020) that care currently provided solely by human caregivers seems to be shifting towards care provided through collaboration between human caregivers and technologies, but that the rules and practices for this division of work are still unclear. There is almost mythical talk that "the robots are coming," but when, how, and under which conditions, what it means in practice, and what robots' place will be in the care context are still largely undefined issues sparking discussion. In socio-technical terms, several "socio-technical negotiations" (see Akrich et al., 2002) seem to be ongoing within the regime. There is still no clear pathway to collaboration, and although there is much interest in robotics in elderly care, mainly due to economic pressures, attitudinal and other constraints exist. We listed three general-level socio-technical scenarios: (1) human-oriented care, in which robots assist just a little or in certain tasks, mainly on an experimental basis; (2) care produced jointly by humans and robots, with a smooth and well-defined division of labor; and (3) technology-oriented care, in which humans act mainly as "interpreters" and "backup" (Pekkarinen et al., 2020). Although how different countries react to the transition remains to be seen, further research on the role of knowledge in socio-technical transitions is needed.

#### Impact Assessment and Co-creation at Different Levels

Continuous and early impact assessment (emphasized in the Zora study; Melkas et al., 2020b) is an essential element at all three levels. Importantly, care robot implementation research needs attention, as it provides a longer-term view of robot integration challenges than pilot studies do. Impact assessment—conducted on a continuous basis and early enough, not just as ex-post evaluation—may unveil invisible or seemingly irrelevant processes and stakeholders that should be considered in corrective actions when negative impacts are observed. Opportunities for implementation research have been slowly increasing in Finland (e.g., Melkas et al., 2020b). Piloting is often seen as a process that, at best, starts with the collection of information and ends with evaluation. Evaluators seek to discover factual information on, for example, users' experiences concerning the robot's benefits, challenges, and usability. When considering the innovation ecosystem perspective and, generally, the multi-level perspective, we have found that implementors should approach the integration of robotics into welfare services as a co-creative piloting and implementation culture within the wide ecosystem, rather than as a process (Hennala et al., 2021). Actors in such a culture would emphasize the whole of care (the architecture, processes, actions, and ways of thinking) into which robots are being brought, at the different levels—micro, meso, and macro—and at any interfaces between them.

The focus should be on paying close attention to what takes place and emerges during the pilots and implementation, particularly the kinds of dynamics that occur and who is truly involved in the co-creation (the users, notably). From the perspective of managing such a cross-cutting culture and the innovation ecosystem, it is essential to understand and utilize such focused knowledge by, for example, strengthening the positive elements and weakening or eliminating the negative aspects identified in our studies. Managing a co-creative piloting and implementation culture is obviously demanding, as co-creation within the integration of robotics comprises not only direct interaction between diverse people, but also factors such as professional identities, managerial practices, "states of mind," feelings, responsibilities, and future horizons (Hennala et al., 2021).

Altogether, with our multi-level studies we confirmed numerous knowledge- and knowledge building-related needs, such as a general lack of knowledge about the benefits of robot use, robots' tasks in services, their integration into clients' services, and collaboration between various stakeholders. Knowledge is also needed to build up competence in management and procurement, and to help address people's exaggerated expectations for, and fears towards, the use of robots. Knowledge needs to be nurtured early, such as during the education of future care professionals. Knowledge brokers—actors who "translate" diverse stakeholders' different "languages" for the common good and are aware of different types of knowledge—are essential, as is elaborating on relevant knowledge about robots at different levels to promote a successful socio-technical transition and innovation ecosystem development. Some of these findings were already visible in our micro- and meso-level field studies, but a multi-level perspective is essential for this topic.

#### **Discussion and Conclusions**

With the different studies presented in this chapter, we have focused on knowledge and knowledge building in many ways, whether regarding clients of services utilizing care robots, their relatives, professional caregivers, or other groups or levels. The relationship between knowledge and technology is complicated and multifaceted, and we have discussed it by focusing on the use of care robots. We have offered a synopsis of our most recent care robot studies, conducted at the macro, meso, and micro levels. Technological change requires numerous changes in knowledge, yet the essential concept of knowledge may be handled in an aggregate way that hides much of its potential. Knowledge is not a stable or homogeneous issue; researchers have previously identified numerous types of knowledge. In the future, researchers could also consider discerning different types of knowledge during the multi-level technological change brought about by the emergence and implementation of robot technology. In the remainder of this chapter, we focus on our core concept related to knowledge and knowledge building: orientation to care robot use. We also propose practical orientation pathways on the basis of our research and of a guide that we have written on this topic (Melkas et al., 2020a).

#### *Orientation to Care Robot Use*

By presenting a compilation of recent micro-, meso-, and macro-level studies on care robots, we have elaborated on the relationship between robot technology and knowledge and aimed at unveiling a more systemic view of the knowledge related to care robot use. We propose shifting the focus from mere training—the provision of information—to a more comprehensive understanding of *processes and actions* towards knowledge building in this area as a prerequisite for reaping the benefits of care robot use. Various concepts related in one way or another to knowledge and technology may obscure the essentials—concepts such as acceptance, adoption, and assimilation, or introduction, familiarization, domestication, and embedding. We, too, used multiple concepts in our research. Whereas previous researchers have discussed training, especially when new technology is adopted, the focus of our research synopsis is broader—the advancement of an increasingly systemic and multi-level perspective on knowledge building—with the aim of expanding the relatively narrow focus of training towards a more comprehensive and interactive process and action focus.

We propose *orientation to care robot use* as a key issue in societies, workplaces, and homes, and define it as a continuous co-creative process of introduction to technology use and its familiarization, including the learning of multi-faceted knowledge and skills for its effective use (see also Johansson-Pajala et al., 2020; Melkas et al., 2020a). By "co-creative process," we refer to collective action with differing roles and participants, and to the importance of identifying opportunities and co-creating practical possibilities through a process of sharing knowledge in dialogue (Bergdahl, Ternestedt, Berterö, & Andershed, 2019). "Introduction to technology use and its familiarization" relates to user involvement among professionals in the implementation of technology in care services (Glomsås et al., 2020). "Learning of multi-faceted knowledge and skills for effective use" covers care professionals' involvement, knowledge, and ownership, which researchers have shown to be important success factors in innovation processes in the workplace (Framke et al., 2019; Tuisku et al., 2022). We regard this perspective as complementing existing technology acceptance and diffusion models, whose creators focus on the different stages of technology adoption (Khaksar et al., 2021). We focus on the processes and actions taking place, or needing to take place, at different levels; on *how* the adoption, acceptance, and meaningful use of care robots can be facilitated; and on understanding this process as inherently social action taking place among orientation givers and receivers, in addition to more individual-level action (Tuisku et al., 2022; see also Melkas, 2013).

Referring to Venkatesh et al. (2003), our understanding of orientation is particularly related to the "facilitating conditions" construct. Orientation is the action of orienting oneself or others. It should not be a one-time activity (when a device or solution is brought into use) but an ongoing process. We thus understand the construct as much more than (initial) training; as a process, it should also be able to "absorb" critical views and questioning attitudes. The word "orientation" itself does not have the self-evidently positive nuance of "acceptance" or "adoption"; thus, it may be considered more neutral. Many studies stop at seeking to understand what affects the adoption of technology, for example among care professionals, to provide new knowledge for introducing and implementing various technologies in care in the future. However, they fail to take into account the orientation-related "doing part." Innovation scholars call the experience-based mode of learning and innovation the "doing, using, and interacting" (DUI) mode (Jensen, Johnson, Lorenz, & Lundvall, 2016). Our understanding of orientation resembles that kind of thinking (see also Tuisku et al., 2022). Learning "skills for effective use" (included in our definition) is at stake here.

The agency of multi-level actors from the public, private, and non-governmental sectors is needed for developing orientation processes and actions in broad collaboration. Essentially, we claim that such an understanding of orientation to care robot use is a way of thinking, not only a question of practical processes and actions. For example, emphasizing the roles of orientation givers and receivers may renew one's thinking, even about one's own role, as dual roles may exist in practice (e.g., among

care professionals or societal decision-makers). In other words, actors must understand the co-creative process (included in our definition of the concept); orientation to care robot use is neither mere training nor a one-way knowledge transfer intervention. The relationship between knowledge and orientation is two-way: On the one hand, we believe that orientation is necessary for knowledge building; on the other hand, we include the learning of multi-faceted knowledge in (our definition of) orientation to care robot use. This relationship may differ partly depending on the level of detail and the context of the discussion.

#### *Orientation Pathways*

At present, [the discussion] concentrates more on whether robots can care for people or not, and as, in my opinion, it is quite clear that humans can never be replaced, I am frustrated. Are we really concentrating on this now, when there are so many other things that should be discussed? (Political decision-maker)

We now turn to discussing orientation pathways in a more concrete sense. We have proposed the *why*, *what*, *who*, and *how* aspects of orientation to care robot use as a foundation for the creation or refinement of orientation practices at the user (micro), organizational and community (meso), and wider societal (macro) levels, depending on the context (Johansson-Pajala et al., 2020; Melkas et al., 2020a). Different societal levels imply different kinds of stakeholders playing the central role in the care robot discussion and orientation (see the interviewees in section "Care robots in Finland: Overall findings," or Melkas et al., 2020a).

In Figure 2.3, we show the levels, some examples of stakeholders, and their tasks. The organizational and financial models, as well as the patterns of necessary collaboration, depend on the country and on other circumstances and prerequisites. Orientation to care robot use should contain several phases in a continuous way, and the stakeholders and their tasks may differ depending on the phase. Because care robots are very diverse, different robots may require emphasizing different aspects. The variety of robots available for a wide range of care tasks produces further knowledge needs. For people with different illnesses or diverse needs (e.g., people with disabilities), different kinds of orientation may also be necessary (Melkas et al., 2020a). In general, care services are a demanding application area for service robots, as many clients, such as the "oldest old," may be vulnerable and fragile.

Each aspect—*why*, *what*, *who*, and *how*—requires careful attention and planning (for further details, see Johansson-Pajala et al., 2020; Melkas et al., 2020a), and at the different levels, as we have implied with our research. Orientation is one way to increase knowledge and provide practical, user-centered learning to improve the acceptance of care robots and promote inclusive technology use. It needs to be seen as processes and actions taking place among orientation givers and receivers at different levels. Pilot study researchers and those engaged in early implementation efforts have identified various obstacles to the acceptance of care robots and deficiencies in their use. This knowledge needs to be put to use to tackle shortcomings

**Fig. 2.3** Examples of stakeholders at different levels and examples of their tasks associated with care robot orientation. Source: Adapted from Melkas et al., 2020a, pp. 33–42. Copyright 2020 by Authors. Adapted with permission

in training by technology providers, overcome the neglect of the specific needs of care organizations, care professionals, clients, and their relatives, and consider the different ways in which individual people learn new things.

As for older people, care robots may have an important impact on the quality of individuals' lives, their engagement with others, and their participation in wider society. Realizing this potential requires a better understanding of the preconditions under which care robots improve older people's lives, contribution, and social engagement; practical information on how to deal with current and future shortcomings in care robot use; and policy development. Opportunities for learning about care robots must be provided for older people and those around them, as well as, systemically, for society at large, for the benefit of policy development (see also Fig. 2.4).

Orientation to care robot use is also necessary for both potential and present users. The variety of robots itself generates further needs. Different groups may require different dimensions of orientation, depending on the receiver, the provider, the type of robot, and the context. Some may find general orientation sufficient (mainly responding to the "what" question), whereas others may require experience-based orientation from their peers, orientation as part of education, technically focused orientation, orientation tailored to managerial or administrative issues, or orientation for collaboration in the field of care robotics (between organizations, networks, etc.). If actors continue to overlook such wider orientation, the potential benefits of robot use are likely to remain unrealized, and investments will be wasted.

**Fig. 2.4** An illustration with key messages on orientation to care robot use from a guide by Melkas et al. (2020a). Source: Reprinted from Melkas et al., 2020a, p. 52. Copyright 2020 by Authors and Petri Hurme, Vinkeä Design Oy. Reprinted with permission

Older adults need to be able to voice their needs, expectations, and wishes personally, without others appointing themselves their spokespersons. Nor should orientation rely on prevailing stereotypical perceptions of older adults. The whole orientation process, from design to implementation and follow-up, should be characterized by a user-centered approach, not a focus on technical ambitions. Orientation should not stop once care robot technology has been introduced and essential skills have been learned. When considering the necessary skills, relevant questions also concern the role and usefulness of robot technology in care services—for example, what are the aims of using it? These aims may remain unclear to many stakeholders, especially amid the hype that can sometimes be heard in care robot discussions.

So far, the wider societal level of orientation towards care robot use has been overlooked. Its demands and prerequisites differ from those at the user level, although they share similar characteristics. Consequently, a prudent long-term strategy is needed, involving all stakeholders, including those at the user, organizational, and societal levels, to provide a solid and well-founded orientation. This is what we mean by "pathways for orientation to care robot use": seeing the importance of orientation at the level of people and society, finding one's own appropriate way of implementing it, and internalizing systems thinking, including listening to the needs of diverse users.

Actually, our diversity increases; it doesn't decrease. Among older adults, there is a spectrum of life experiences, education, preferences, health conditions, experienced health, and all; it is huge. This implies the need for modularity and applicability. Maybe there cannot ever be an ideal solution. [We must ask] "What serves whom?"; otherwise, the risk increases that we will do completely the wrong things, because it is so difficult to understand. I don't even understand what it is like to be 94 or what it really means when your back is hurting when you walk. (Political decision-maker)

**Acknowledgements** This research was supported by the Academy of Finland, Strategic Research Council ('Robots and the Future of Welfare Services' – ROSE project; decision numbers 292980 and 314180) and by the ORIENT project under the JTC 2017 launched by JPI MYBL. The support of JPI MYBL and our national funder, the Academy of Finland (decision number 318837), is gratefully acknowledged.

#### **References**


**Helinä Melkas**, Doctor of Science (Technology), is Professor of Industrial Engineering and Management, especially service innovations, at Lappeenranta-Lahti University of Technology LUT, School of Engineering Science, Finland, and Professor II at the University of Agder, Centre for E-health, Norway. Her research interests are digitalisation, particularly in healthcare and social care services, robotics, health and welfare technology, user involvement, impact assessment, and innovation management. She is actively involved in the Nordic Research Network on Health and Welfare Technology and in Lahti Living Lab, inter alia.

**Satu Pekkarinen**, PhD, is Associate Professor of sociotechnical transition of services at Lappeenranta-Lahti University of Technology LUT, School of Engineering Science, Finland. Her research interests are sociotechnical transitions, service digitalisation, and the implementation and use of health and welfare technologies in care services. She has published dozens of scientific articles on the topic and is an active member of national and international research networks.

**Lea Hennala**, PhD (innovation systems), recently retired from a position as senior researcher at Lappeenranta-Lahti University of Technology LUT, School of Engineering Science, Finland. Her research areas include the implementation and use of care robots in elderly care, user involvement, and the co-creation of service innovations in both the public and private sectors.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 3 On the Need to Understand Human Behavior to Do Analytics of Behavior**

**Joachim Meyer**

Department of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel
e-mail: jmeyer@tauex.tau.ac.il

In our current "age of data", Artifcial Intelligence (AI), Machine Learning (ML), Data Science (DS), and analytics are becoming part of problem-solving and decision-making in many areas, ranging from recommendations for movies and music to medical diagnostics, the detection of cybercrime, investment decisions or the evaluation of military intelligence (e.g., McAfee & Brynjolfsson, 2012). These methods can be used because an abundance of information is collected and made available. Also, the tools for analyzing such information are becoming widely accessible, and their use has become easier with platforms such as BigML. While in the past, statisticians or data scientists were in charge of the analytics process, now anybody with some basic computing skills can conduct analyses with R or Python, using open-source tools and libraries.

These developments are the basis for new insights into, and understanding of, social and physical settings. They also alter the decision processes used by organizations and the information available to individuals. As such, they affect reality, its representation in digital records and the media, and the ways people interpret this reality and act in it. The dynamic interaction between the physical, digital, and social realms shapes current societies. Understanding and modeling it is a major challenge for both data science and the social sciences.

Data analytics, and the information one can gain from them, can be used in decision-making processes, in which they help to choose among possible alternatives. Algorithmic decisions can be advantageous in legal contexts, such as bail decisions (Kleinberg, Lakkaraju, Leskovec, Ludwig, & Mullainathan, 2018). In medical settings, the development of *personalized evidence-based medicine* for diagnostic or treatment decisions (Kent, Steyerberg, & van Klaveren, 2018) depends on analyzing electronic medical records with data science tools. AI-based analyses in medicine can indeed improve diagnostic or therapeutic decisions (Puaschunder, Mantl, & Plank, 2020). Similarly, algorithms in financial markets, implemented as algorithmic advisors or in algorithmic trading, can provide clear benefits (Tao, Su, Xiao, Dai, & Khalid, 2021).

Alongside the large potential benefits for decision-making one can derive from data science, there are also potential dangers. For instance, in medicine, clinical decision support systems can exacerbate the problem of alarm fatigue by generating numerous alarms that have limited clinical importance, or they can have a negative effect on physicians' or nurses' skills if the medical staff learns to rely on the support and does not practice independent decision-making (Sutton et al., 2020). In financial markets, algorithmic decision-making can also be problematic, causing possible systematic anomalies, such as *flash crashes* (Min & Borch, 2022).

#### **Decision Quality and Data**

The desire to improve decision-making is often the rationale for information collection and for making this information available. A major premise in research on decision-making is that the quality of a decision depends on the quality of the information on which it is based (Raghunathan, 1999). Ideally, information should provide the decision-maker with as accurate a picture as possible of the expected results from choosing one rather than another course of action, given the conditions in which the decision is made, the developments over time that will occur, and any other factors that need to be considered. This will depend on the properties of the available information and on the decision-maker's understanding of the causal processes that determine outcomes.

While data science was mainly developed in organizational contexts, such as business administration, transportation, or medicine, the notion exists that data can also be used by individual citizens or households. Access to data can help them, for instance, decide on investments based on the analysis of relevant economic variables. Data can also help in choosing a neighborhood where one wants to live, depending on information about the education system, crime levels, scores of individual happiness, or other relevant variables. This view, together with the ease of collecting and making data available, led to the idea that citizens should have access to data to use it to make informed decisions (e.g., Marras, Manca, Boratto, Fenu, & Laniado, 2018).

If one takes the notion that the quality of the data determines the quality of the decisions to an extreme, one could argue that appropriate analyses of the data make decision-making unnecessary: The results of the analysis point clearly to the alternative that should be chosen. This is indeed implemented, to some extent, in contexts in which algorithms make most decisions, such as algorithmic trading in financial or other markets (Virgilio, 2019).

The optimistic view of the value of data is not limited to decision support. The claim has been made that with the emergence of data science, the availability of large volumes of data, and the development of very efficient algorithms to analyze the data, there will be an "end of theory" (Anderson, 2008): One no longer needs theories to explain phenomena; one can simply look at the data to understand a phenomenon. Some observers may consider this a step forward out of the conundrum caused by the multiple theories of social phenomena that often have relatively limited predictive value, and out of the replication crisis that plagues, for instance, psychology (Jack, Crivelli, & Wheatley, 2018). So far, however, this expectation has not received any support.

The availability of data may support a better understanding of the world that can be used for policy, organizational, or individual decision-making. It can also be formalized in scientific generalizations regarding social phenomena. These developments may provide major opportunities for technological, economic, social, or intellectual progress. However, some caution may be warranted when considering these possible developments, and specifically, the hope that algorithms can help people make better decisions.

In the following sections, I will first show that automating decision-making has great potential, whereas human involvement in the decision process itself may be difficult to implement or may, at times, be practically impossible. This does not mean that there is no need for human involvement: I will argue that human involvement is crucial for understanding the processes that create the data that serve as input for the analyses and generate the results.

#### **The Human Role in Decision-Making When an Intelligent System Is Involved in the Process**

Any analytics-based decision support an organization wants to implement needs to be integrated into the decision processes the organization (or an individual) uses. Specifically, the organization must decide on the appropriate use of the information from the decision support. To what extent should decision-makers (such as physicians who need to make diagnostic or treatment decisions) rely on the information an algorithm provides, and when can they override it? For the decision support to be useful, it needs to be good, that is, the quality of the recommendations should be similar to or better than decisions made by people without the support. There are indeed decision support systems that reach such a level of performance, for instance, in the AI-based detection of early-stage breast cancer (McKinney et al., 2020). However, when introducing decision support, it is unclear how humans should be involved in the decisions. Three forms of human involvement in decisions turn out to be problematic.

First, it is often suggested that the AI output should serve as support for the human decision-maker, a notion captured by the term *decision support*. When decisions are relatively clear, such as the decision whether a lump is a malignant tumor or not, the output of the decision support can replace the human decision-maker if the decision support is better than the human. It is problematic to assume that we can simply provide decision-makers with the output of the decision support and that they will be able to integrate it correctly into their decision. To do so, they must assign appropriate weights to the information they have and to the additional information the decision support provides. Empirical research on people's ability to use decision support shows consistently that people often assign nonoptimal weights to information from different sources. They tend to give too little weight to better information sources and may assign excessive weight to bad information sources (Meyer, Wiczorek, & Günzler, 2014). Also, when the human and the automation differ in their ability to perform a detection task, it is very difficult to improve performance beyond that of the better of the two acting alone (Meyer & Kuchar, 2021).
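A stylized numerical illustration (mine, not from the chapter) of why these weights matter: for two independent, unbiased estimates of the same quantity, the error-minimizing linear weights are inversely proportional to each source's error variance, and the common equal-weighting default can even be worse than relying on the better source alone.

```python
# Stylized illustration (not from the chapter): combining a human estimate
# and an automated estimate of the same quantity. With independent, unbiased
# Gaussian errors, the variance-minimizing weights are proportional to
# 1 / error variance; equal weights waste the better source.
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0
n = 100_000
human = truth + rng.normal(0, 10.0, n)   # noisier source (sd = 10)
auto = truth + rng.normal(0, 4.0, n)     # better source (sd = 4)

w_auto = (1 / 4.0**2) / (1 / 4.0**2 + 1 / 10.0**2)   # optimal weight ~ 0.86
optimal = w_auto * auto + (1 - w_auto) * human
equal = 0.5 * auto + 0.5 * human

print(f"RMSE auto alone : {np.sqrt(np.mean((auto - truth) ** 2)):.2f}")    # ~4.0
print(f"RMSE equal mix  : {np.sqrt(np.mean((equal - truth) ** 2)):.2f}")   # ~5.4, worse than auto alone
print(f"RMSE optimal mix: {np.sqrt(np.mean((optimal - truth) ** 2)):.2f}") # ~3.7
```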

Second, it is also unrealistic to assume that people can adjust the parameters of the automation to make it work better. Here, too, empirical research has shown that people often set incorrect system parameters, especially if they do not get the optimal information for setting the parameters (Botzer, Meyer, Bak, & Parmet, 2010). Furthermore, the number of observations needed to determine the correct setting of a system parameter is often so large that it is simply impossible for a person to collect sufficient information to determine the setting (Meyer & Sheridan, 2017). Thus, one can either specify rules for how parameters should be adjusted (which can then easily be automated), or one can use fixed parameter settings. In both cases, human involvement is unnecessary.

Third, it is widely demanded that a human retain responsibility for the final decision. This approach appears, for instance, in the discussion of autonomous lethal weapon systems or in the protection of citizens from purely algorithmic decisions, as required by Article 22 of the EU General Data Protection Regulation (Roig, 2017). This demand may also be unrealistic. A system that is better than the human decision-maker in a decision task will lower the human involvement in the task and the human responsibility for outcomes (Douer & Meyer, 2020, 2021). Consequently, it may seem that humans have no actual role in the decisions once good AI-based algorithms are available to support them.

The development of processes that rely on algorithms without human involvement may not be bad. Meehl showed as early as 1954 that *statistical predictions* (namely, predictions based on statistical tools, such as linear models) are better than *clinical predictions*, the predictions made by human experts (Meehl, 1954). This conclusion has been replicated consistently (Dawes, Faust, & Meehl, 1989; Grove & Lloyd, 2006). Furthermore, there may be an inherent tendency to avoid information from algorithms, which can lead to the nonoptimal use of algorithmic decision support (Dietvorst, Simmons, & Massey, 2015). Thus, purely algorithmic decisions are potentially better than human decisions, even when high-quality algorithmic decision support is available to the human decision-makers.

#### **The Analytics Process as a Human Activity**

A simplistic view sees data science as a way to reach insights and to make decisions that are as objective, evidence-based, and "mathematically correct" as possible. However, a closer look at the process by which results are obtained reveals that matters are more complicated. In fact, any analytics process involves a sequence of choices and decisions made by people throughout the process (see Fig. 3.1 for a schematic depiction). Some choices may simply be based on the analyst's intuition or habit, may follow a default option, or may use a convention in the field. In contrast, other decisions may result from carefully weighing the advantages and disadvantages of different courses of action based on systematic analyses and an understanding of the specific problem.

**Fig. 3.1** The data science process. Source: Design by author

Decisions are made at all points at which there are arrows in the figure. At each point, the person performing this part of the analytics process (who may differ from the people who perform other parts) must select one of a number of possible alternatives. It is important to analyze these selections because they may strongly affect the results obtained in the analyses. So far, this issue has gained relatively little attention. However, studies have shown that different groups of data scientists may reach very different conclusions when analyzing the same data set.

Any analytics process that is related to decision-making begins with some questions the process is intended to answer. The posing of the questions results, of course, from decisions. The process itself begins with creating the data set that will be analyzed. First, relevant records need to be *located*. Data sources can be, for instance, patients' electronic medical records, court records, recordings in a call center, and so forth. An important part of the creative use of data science is coming up with possible sources of data that can be analyzed. The raw data must be adapted to serve as input for the analyses. It is necessary to *select* the specific data that will be analyzed. This includes definitions of the variables and the temporal and geographic limits of the data to be analyzed, that is, data from how many years back, or from what locations, does one want to analyze? If data are collected over a large area, should one analyze all subregions or focus on specific regions? An analyst may, for instance, choose to ignore more rural parts and focus on cities. Certain subpopulations may also be excluded from analyses. For example, in Israel, ultraorthodox Jewish neighborhoods differ in many respects from other neighborhoods: The use of smartphones is limited, and web browsing can be socially sanctioned. Consequently, their inclusion in some analyses may create biased results.

The raw data are combined into files that can be analyzed. These data then undergo *data preprocessing*, in which they are cleaned, duplicate records and outliers are identified and possibly deleted, and so on. The definition of outlier values is in itself a decision the analyst needs to make. Some values are clearly faulty (a parent who is less than 10 years old, according to the birthdate on record), but others are less clearly outliers. Is spending 60% of one's income on restaurants a legitimate value or an error in the data?
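A toy sketch of how two defensible preprocessing rules yield different data sets (the records and thresholds below are invented for illustration):

```python
# The outlier definition is itself an analyst decision; two defensible but
# different cleaning rules produce two different data sets (invented data).
import pandas as pd

df = pd.DataFrame({"parent_age_at_birth": [23, 31, 8, 45, 17],
                   "restaurant_share_of_income": [0.10, 0.62, 0.25, 0.05, 0.33]})

clearly_faulty = df["parent_age_at_birth"] < 10          # impossible record
debatable = df["restaurant_share_of_income"] > 0.60      # error or lifestyle?

print(df[~clearly_faulty])                   # rule 1: drop impossible records only
print(df[~clearly_faulty & ~debatable])      # rule 2: also drop debatable values
```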

After preprocessing the data, one must *prepare the analysis* by choosing the specific algorithm to use. One then actually runs the algorithm. This, too, requires choices, such as the definition of parameters. Every algorithmic tool is sensitive to certain properties of the data and less sensitive to others. Each tool is more likely to reveal certain phenomena and less likely to reveal others. Hence, the choice of the tool and the parameters is likely to influence the results.

For instance, in one study, 29 teams of data scientists received the same data set, aiming to test the hypothesis that soccer referees give more red cards to players with darker skin color than to players with lighter skin (Silberzahn et al., 2018). The teams used 21 unique covariate combinations in their analyses. About two-thirds of the teams found a significant effect in the expected direction, while one-third did not. Thus, the choice of the analytical method is by no means determined by the data and the research question.

The next step is *defining the output* of the algorithm, which can be presented in numerous ways, and the analyst must decide which one to use (Eisler & Meyer, 2020). The different presentation modes will make different types of results more or less salient. This may depend on the particular aspects the analyst, or those who requested the analysis, consider important and want to emphasize. For instance, the presentation of results by executives depends to some extent on the quality of a company's business results. When business results are not very good, there may be a tendency to use more elaborate graphics (Tractinsky & Meyer, 1999).

At the end of the process, one reaches the *interpretation of the results* and the drawing of conclusions from them. Different people may focus on different aspects of the results, depending on their preferences, predispositions, tendencies, interests, and so on. One should also remember that only in academic or research settings are analytics done purely for their own sake. Beyond research, analytics serve some purpose: Someone wants to make a decision, such as a clinical decision in medicine, a policy decision regarding municipal, regional, or countrywide policies, or a business decision in a company.

Thus, data science and data-based AI are complex processes, with decisions at numerous points along the way. All these decisions involve stakeholders, and the choices will depend to some extent on factors such as the beliefs, preferences, or costs and benefits of the people involved in the process. The decisions determine and affect the course of the analytics process. They will affect what can be analyzed, the questions that can be asked, the tools that are used, and the insights that can be gained. It is of great importance to understand these decisions to create awareness of their possible impact on the outcome of the analytics process.

In a large-scale study (Botvinik-Nezer et al., 2020), 70 teams of scientists analyzed the same functional magnetic resonance imaging data set. No two teams chose the same workflow for the analyses, leading to large variability in the results the different groups reached. The results of a study on microeconomic causal modeling are similar. Different teams of analysts went through different analytics processes for the same data set, resulting from many decisions each team made that were not made explicit (Huntington-Klein et al., 2021).

This is not an argument against the use of data science in decision-making. Data science can definitely provide valuable new tools and methods to support decision-making. However, data-science-based decision-making is not without problems. Very often, the people who do data science come from a computer science or mathematics background. This does not necessarily prepare them for critical analyses of the analytics process. The decision support is then often evaluated in terms of the elegance of mathematical solutions or algorithms, or through the quantitative evaluation of algorithm output against some benchmark, in measures such as precision and recall, the area under the curve (AUC), the F1 score, and so forth (Padilla, Netto, & da Silva, 2020).
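As a minimal sketch of such a benchmark evaluation, assuming a scikit-learn workflow and invented labels and scores (not any particular study's data):

```python
# Benchmark-style evaluation of a binary classifier's output: precision,
# recall, F1, and AUC, computed against illustrative hand-made "ground truth".
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]             # "ground truth" labels
y_score = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1, 0.55, 0.35]
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded decisions

print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))  # threshold-free measure
```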

The output of an algorithm needs to be compared to some accepted measure of the reality it is supposed to reflect, often referred to as the "ground truth." For instance, an algorithm that is supposed to predict complications in medical treatments needs to be run on data for which the occurrence of complications is known. The extent to which the algorithm correctly predicts which patients will experience complications indicates the quality of the predictions. The evaluation of algorithm output with statistical tests of its match to some "ground truth" creates the impression that the process is objective. However, seen from a somewhat critical perspective, data science is a human activity that is concerned with human actions. It is necessary to understand this activity to make adequate use of these methods for decision-making and the understanding of phenomena.

#### **Considering the Data Generation Process**

Human behavior and decisions affect not only the analytics process, in which data serve as input and conclusions are derived from a series of analytical steps. The data generation process itself is also not a simple recording of events that occurred: The process that creates the traces which will eventually become the analyzed data often reflects human activities in itself.

Any individual can observe only certain, very limited parts of reality. Information about other parts can be conveyed by other people (lore, tradition, teachings, gossip, social networks, news, etc.). Computers, and our digital age, create an additional level of complexity. One could argue that it is only a quantitative change from the past to the present, in which more information about events that are not directly observable is now available. However, there is also a qualitative change. We receive much information about the world (be it the physical or the social world) from digital representations.

This information, in turn, may affect our actions in the physical or social worlds (a navigation aid that guides cars may create congestion in certain places). A recommender system that informs us about a certain venue may affect our behavior and the subsequent physical reality.

The digital representation itself is not simply a partial reflection of reality. It also reflects the decisions and behaviors of the people who were involved in the collection of information and its recording. These decisions can be direct actions that affect the occurrence of recorded events, decisions regarding the recording (e.g., what is recorded), and decisions regarding the recorded data (categories, etc.).

A data-science-based decision process aims to base decisions on data, and the data provide a glimpse of the reality that will serve as the basis for the decisions. The analysis of the data is supposed to provide insights into this reality. The approach to reality can be seen as the interplay between three realms (see Fig. 3.2). There is an individual who observes the physical world and interacts with it. Parts of this physical world are other people, so interactions also happen in a social context. Both the physical and the social realms may leave digital traces in the form of records of activities conducted in organizational settings, social media posts, or recordings from sensors that are positioned in the environment (e.g., cameras) or carried on the person, such as a cellphone that records locations and communication activities. The output from the digital realm may affect social interactions and, to some extent, can even affect the physical reality, for instance, through responses to traffic advisory systems that direct vehicles according to traffic measurements.

**Fig. 3.2** The individual interacting with the interdependent physical, social, and digital realms. Source: Design by author

Individuals interact with all three realms. They act in the physical world, for instance, by purchasing certain goods or moving to a different location or by performing some physical activity. This is often done in close interaction with other people, such as family, neighbors, friends, colleagues, service providers, or people who have some other encounter, relation, or interaction with the person. These interactions are facilitated by digital means and create digital traces.

The records of an individual's social interactions are becoming part of a digital representation of reality. These traces, in turn, will be the basis for data sets that can serve as input for analyses we may want to conduct to gain an understanding of reality. The data sets may contain records of the individual's behavior or properties of the physical world or properties of the social context or properties of the interactions between individuals or between individuals and the social or physical realms. So we have a complex dynamic interplay between physical entities, social relations and interactions, and digital representations. To understand these multifaceted phenomena, combining qualitative and quantitative research approaches is often necessary. This is in line with the proposed *combination of methods* in the study of social networks (Glückler & Panitz, 2021), in which qualitative and quantitative methods are jointly used to study processes and properties of social interactions.

#### **Big Data of Nonexisting Data**

In this digital representation, we expect to find data that can be used to guide the decision-making process for which we do the data analysis. We expect the data to contain information that can improve decisions. However, we must keep in mind that digital representations reflect only a very limited part of the reality of the physical world, individual behaviors, or social interactions, because only some physical events or social interactions are recorded.

A typical example of nonrecorded data is survivorship bias, where data are collected only on events that pass some selection process. For instance, Abraham Wald analyzed airplane survival as part of the Statistical Research Group (SRG) at Columbia University during World War II. He recommended placing protection where few returning planes had been hit, because planes hit in those places, such as the engine or the cockpit, apparently did not make it back to the airfield (Mangel & Samaniego, 1984). A similar story is told about the introduction of steel helmets in the British army in World War I. Supposedly, there was a demand to stop using steel helmets because after they were issued, the number of head injuries increased greatly. The reason was that soldiers wearing the traditional, nonsteel headgear were highly likely to be killed when hit in the head by shrapnel, so the number of injured was smaller. With the steel helmet, previously fatal injuries were no longer fatal, so people ended up in the hospital. Simple analyses of these data could have led to misleading conclusions, such as that steel helmets make head injuries more likely.

Often, knowledge of the physical realm that is not represented in the data is also necessary. For instance, Twitter activities can be used as an indication of the strength of a storm. Such an analysis was applied to assess the effects Hurricane Sandy had on New York City when it hit in 2012 (Shelton, Poorthuis, Graham, & Zook, 2014). This was the strongest hurricane that had hit the New York City area in recorded history. There is indeed a strong correlation between Twitter activities and the strength of a storm, but there were very few Twitter activities in the areas in which the storm was the strongest. Two causes can explain this nonmonotonic relation between Twitter activities and storm strength, and both are related to the physical realm. One is that people very often flee an area after they receive a hurricane warning and are told to evacuate, so they will no longer tweet from this area. A second reason may be that storms tend to topple cellular towers. So even if people remained in the area, they may not have been able to communicate, causing a decrease in communication activity in these areas.

These are examples of nonexisting data of existing events that result from a biased or partial recording of data. They are due to the physical properties of the data collection process or of the events that generate the data in physical reality. However, the selectivity of the data does not only depend on the external statistics of the physical properties of the world. It may also result from specific human actions that may create a somewhat partial view of reality. For instance, a study of credit card data in a country in which there was social unrest showed that the effect of the localized unrest (which mainly involved large demonstrations in specific locations in a metropolitan area) diminished with distance from the demonstrations, as expressed in the number of purchases and the amounts of money spent on a purchase (Dong, Meyer, Shmueli, Bozkaya, & Pentland, 2018). This effect was not the same for all parts of this society: Some groups of the population showed a greater change than others. However, when interpreting these results, we need to keep in mind that we have only partial data on the economic activities in this country during this era of unrest, because we only have credit card data. People in this country also use cash, and the decrease in credit card purchases may reveal only part of the picture.

Another factor that affects the digital records of behavior is that some behaviors are recorded more easily than others. For instance, on social media, socially desirable and high-prestige behavior will appear more often in posts than less desirable behavior. Viewers, consequently, may feel that others are more engaged in these positively valued behaviors than they are themselves (Chou & Edge, 2012). Also, the digital image of the world that may emerge from scraping social media data will present a biased view, possibly overrepresenting the behaviors people like to post about on the web. Any decisions made based on these data, for instance, concerning public investment in different facilities for leisure activities or the development of product lines for after-hours activities, may be misled by people's tendency to post about some things and not about others.

Another example of the partial representation of the physical or social reality in data comes from Omer Miran's master's thesis (Miran, 2018). The study analyzed policing activity in the UK, as expressed in the data the UK police upload to their website.<sup>1</sup> Making police data openly available allows the public to monitor police activities. It also provides the basis for assessing the risk of crime in different areas. This can, for instance, help individuals decide where to live, rent or buy an apartment, and raise their kids.

<sup>1</sup>See https://data.police.uk.

The study aimed to determine the relative frequency of different types of crimes in different parts of the UK, where each part was defined by the specific police station that oversaw an area. The analysis combined information from several databases. The most important one is the UK police "crime cases database" for the years 2010–2015, which includes reports of crime incidents and their locations, recorded with relatively rough geographical information. A second database covers police stop-and-search activities for the year 2014, also downloaded from the UK police site; here, the location at which a person was stopped is recorded as well. Two other databases were from the UK Office for National Statistics and included population size and average weekly earnings for different locations.

The analysis focused on two different types of crime—burglary and drug-related crime. In a burglary, one or more people enter a location (a house, business, etc.) without permission, usually with the intention of committing theft. One can assume that a burglary will almost always be reported to the police and will appear in the records. Therefore, the number of burglary incidents in police records likely reflects the actual frequency of burglaries in an area.

The second type of crime was crimes related to drugs, such as drug deals. In this case, the people involved in the crimes will usually not report their occurrence. Consequently, a drug-related crime will usually only appear in the police files if the police make an active effort to detect it. Hence, the data on drug-related activities do not really reflect the volume of such activities in an area but rather the police activity in the area.

The analyses of the data showed that there was no correlation between the amount of police activity in an area (as measured through the number of stop and search events in the area) and the number of burglary events (r = −0.047). However, there was a positive correlation between police activity and recorded drug-related crimes (r = 0.180). Thus, the two types of crime data indeed refect somewhat different types of events, namely the activity of criminals (in the burglary data) and the activity of the police (in the drug-related crime data). These two types of activities can, of course, be correlated or can be related to other variables that characterize the location.
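A small simulation (my illustration with invented numbers, not the study's data) makes the underlying data-generation argument concrete: if burglaries are reported regardless of policing, while drug offenses are recorded only when police look for them, recorded drug crime will correlate with police activity even though the true volume of drug deals does not.

```python
# Illustrative simulation: recorded drug crime tracks police activity,
# recorded burglary tracks burglars. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n_areas = 500
stops = rng.poisson(50, n_areas)             # police stop-and-search activity
burglaries = rng.poisson(20, n_areas)        # ~all burglaries get reported
drug_deals = rng.poisson(300, n_areas)       # true volume, mostly unobserved
p_detect = np.clip(stops / 2000, 0, 1)       # detection scales with policing
recorded_drug = rng.binomial(drug_deals, p_detect)

print("corr(stops, recorded burglaries):",
      np.round(np.corrcoef(stops, burglaries)[0, 1], 3))     # ~0
print("corr(stops, recorded drug crime):",
      np.round(np.corrcoef(stops, recorded_drug)[0, 1], 3))  # clearly positive
```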

The analysis of the police databases revealed additional clear differences between the picture of reality they provide and the actual reality. In the UK Home Office drug survey for 2013, 2.8%, or 280 out of 10,000 adults aged 16 to 59, reported using illicit drugs more than once a month in the last year. Assuming that these people purchased drugs once a month, they were involved in approximately 12 × 280 = 3,360 drug deals per 10,000 people in a year. In the UK police data set, the yearly average of drug-related crimes was about 28.7 per 10,000 people. Clearly, less than 1% of drug deals appear in police data. This demonstrates the large potential gap between the image of the reality that appears in the analysis of data and the actual reality this image is supposed to reflect.
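The reporting gap follows directly from these figures; a two-line check:

```python
# Back-of-the-envelope check of the reporting gap (numbers from the text)
users_per_10k = 280                           # 2.8% of 10,000 adults
deals_per_year = 12 * users_per_10k           # one purchase per month -> 3360
recorded_per_10k = 28.7                       # drug-related crimes in police data
print(f"share of deals recorded: {recorded_per_10k / deals_per_year:.2%}")  # ~0.85%
```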

#### **Conclusions**

The availability of data can have great value for decision-making. For instance, data-based decisions may lower the effects of biases due to faulty preconceptions or naïve beliefs. Also, many processes, such as controlling large-scale networks or high-frequency trading in financial markets, are only possible with algorithms and must rely on data.

The use of data science and AI in decision-making can often provide valuable information, but the process is not without potential problems. One needs to keep in mind that the data analysis process is a human activity that involves numerous decisions along the way. Each of them impacts the following steps in the process and the eventual outcome. It is important to monitor these decisions and to test the sensitivity of the conclusions to specific changes in the decisions made along the process. Furthermore, the analytics process often concerns human activities. The records these activities generate depend on the decisions of those who do the recording and, to some extent, on the people whose behavior is recorded.

The development of data-based decision-making or support tools requires a combined modeling effort. On the one hand, the usual analytics modeling process needs to proceed, aiming to generate models that can identify the preferable choices in different settings. A model in this context would be the output of the algorithm used for the analytics process, together with information about the quality of the output, compared to some criterion. Often this information results from testing the model, fitted on a training set of data, on a separate, independent data set, the test set. An additional output of the algorithmic process can be information on feature importance, identifying the relative importance of different variables for predicting the outcome variable.
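A minimal sketch of this combined output, assuming a scikit-learn-style workflow on synthetic data (an illustration, not the chapter's own method):

```python
# Fit on a training set, judge quality on a held-out test set,
# then inspect feature importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Quality relative to a criterion, computed on data the model never saw
print("test balanced accuracy:",
      balanced_accuracy_score(y_test, model.predict(X_test)))

# Relative importance of the input variables for the prediction
for i, imp in enumerate(model.feature_importances_):
    print(f"feature {i}: {imp:.3f}")
```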

This should be accompanied by a modeling effort that develops more traditional social science models, drawing on psychology, sociology, economics, or other disciplines. These models can describe the behavior that is related to the analytics process (choices made regarding the questions asked, the selection of the data, the preprocessing of data, the choice of algorithms and their parameters, the presentation of results, the interpretation, and the implementation of insights gained). The models can also relate to the behaviors that generate the data being analyzed, as shown in the examples of drug-related crimes or social media posts during emergencies.

Thus, traditional modeling techniques and data science methods should be combined. Such a combination has the potential to improve decisions and the utilization of data. One can take several steps to achieve this goal. First, data scientists (who often have computer science, mathematics, or engineering backgrounds) should be trained in the social sciences. This would give them critical analytical skills that allow them to question the assumptions behind the analyses and the behaviors that are represented in the data. Data scientists could then detach themselves from the mechanistic process of taking input, running analyses, and interpreting the results only in terms of the input variables and the model output, with feature importance tables and other output data. Analyzing results in view of theories in the social sciences can provide a deeper understanding of phenomena beyond what is possible with atheoretical analyses.

Also, interdisciplinary teams should analyze, evaluate, or implement the results of data science processes used in decision-making. The output of these processes needs to be critically assessed, and the value of the insights gained through the process needs to be quantified. It is important to determine how the information can actually be implemented in the operation of the organization. This requires conducting sensitivity analyses that evaluate the procedures and their robustness.

A critical view of the analytics process and of the implementation of its results is particularly important because data-science-based decision support always depends on the particular data that served as input for the algorithm. Dynamic changes in the data may cause predictions to become less (or sometimes more) precise. The relevance of the data for the decisions may also change with time because options become more available or less expensive or because new alternatives arise.

We need to combine traditional social science methods, such as methods in economics, political science, geography, sociology, and psychology, with the methods used in analytics and data science. There should be a dynamic interplay between the two approaches to phenomena. The combined use of the two has the potential to create a synergy that can lead to better decision-making processes and better decisions. It can also provide insights into the dynamic shaping of reality, following the use of data science, and the effects human behavior has on the data science process.

#### **References**


**Joachim Meyer** is the Celia and Marcos Maus Professor of Data Sciences in the Department of Industrial Engineering at Tel Aviv University. He holds an M.A. in Psychology and a Ph.D. (1994) in Industrial Engineering from Ben-Gurion University of the Negev, Israel. He was on the faculty of the Ben-Gurion University of the Negev, was a visiting scholar at Harvard Business School, a research scientist at the M.I.T. Center for Transportation Studies, and a visiting professor at the M.I.T. MediaLab. He studies the integration of humans into intelligent systems that utilize advanced automation, machine learning, and artificial intelligence. He is an elected fellow of the Human Factors and Ergonomics Society.


## **Chapter 4 Boosting Consumers: Algorithm-Supported Decision-Making under Uncertainty to (Learn to) Navigate Algorithm-Based Decision Environments**

**Felix G. Rebitschek**

Harding Centre for Risk Literacy, Faculty of Health Sciences Brandenburg, University of Potsdam, Potsdam, Germany
Max Planck Institute for Human Development, Berlin, Germany
e-mail: rebitschek@uni-potsdam.de

Human choice, for example, in decisions to consume goods or services or to participate in organizations and events, depends on seeking quality-assured, objectively required, and subjectively needed information (Fritz & Thiess, 1986). Whereas in pre-digital days searching for information required substantial effort, digitalization has improved information accessibility and facilitated consumers' information searches. Individual consumers, however, nowadays face comprehensive sets of information and more offers of products and services than they have the resources to navigate (Lee & Lee, 2004). Selecting information and preventing information overload have become major challenges in preparing consumer decisions (Glückler & Sánchez-Hernández, 2014).

Given this complexity and dynamism, information selection is a decision problem under uncertainty. Distinct from problems of risk, problems of uncertainty are characterized by a lack of reliable evidence on choice options, on the potential consequences of pursuing or not pursuing those options, and on the probabilities of those consequences occurring (Knight, 1921). In contrast to non-reducible aleatory uncertainty (e.g., the next coin flip), these are problems of epistemic uncertainty that actors need to reduce by using knowledge.

On the other side of the decision-maker, algorithms pre-select, curate, and personalize the decision environment. Yet these algorithms do not necessarily reduce uncertainty for the individual consumer with his or her information needs. Instead, nontransparent, dynamic, and responsive decision environments often seduce (dark patterns) or nudge consumers towards certain options and can provide the individual with relatively inferior recommendations or choice sets as compared to static, non-responsive consumer decision environments with transparent options (Mathur, Mayer, & Kshirsagar, 2021). Data-driven behavioral control is unlikely to support informed consumer decision-making.

Informed decisions in Western industrialized countries (e.g., as required by law in the German healthcare system; Deutscher Bundestag, 2013) stem from an individual who weighs the possible harms and benefits of alternative courses of action according to the best available evidence. Informed participation in algorithm-based environments (Gigerenzer, Rebitschek, & Wagner, 2018), moreover, requires continuous interaction with benefit-harm ratios that change dynamically due to external factors (e.g., the algorithm is modified by the provider) and internal factors (e.g., it responds to one's past decisions). Thus, besides understanding the benefits and harms of consuming or not consuming within a decision environment, grasping how the personal benefit-harm relationship changes dynamically can be crucial. Which strategies or rules need to be taught to consumers so that they can reduce the uncertainty of challenging decision problems and are more likely to make informed decisions?

Algorithms can support decision-making under uncertainty. One class of algorithms or models that boosts the decision-maker's competencies (Hertwig & Grüne-Yanoff, 2017) are fast-and-frugal decision trees (FFTs) (Martignon, Vitouch, Takezawa, & Forster, 2003). This type of algorithm aims to reduce a decision process to a handful of the most predictive features, termed cues, and their combination. Consumers can robustly classify decision options (e.g., determine whether an informed decision is possible) by independently checking the presence, absence, or level of those cues. Accordingly, the tree comprises classifications, decisions, or actions. Each cue comes with a branch either to the next cue or to an exit (e.g., a decision). In contrast to decision trees generally, FFTs involve no further branching—apart from the last cue, which branches into two options (Martignon, Katsikopoulos, & Woike, 2008). From this structure, users glean that they can ignore further information, which makes FFTs a type of formal heuristic (Gigerenzer & Gaissmaier, 2011). Researchers in finance (Aikman et al., 2014), medicine (Green & Mehr, 1997), psychiatry (Jenny, Pachur, Williams, Becker, & Margraf, 2013), and the military (Keller, Czienskowski, & Feufel, 2014) have shown that FFTs enable fast and reliable decisions—they perform similarly to more complex models (e.g., logistic regression, random forests, and support vector machines).
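To make the structure concrete, here is a minimal sketch of an FFT classifier; the cue names, checks, and exit labels are hypothetical illustrations, not one of the RisikoAtlas trees.

```python
# Minimal sketch of a fast-and-frugal tree (FFT), assuming binary cues and a
# fixed cue order. Every cue but the last exits on one side only; the last
# cue branches into two exits.
from typing import Callable, List, Tuple

# Each entry: (cue description, check function, classification on a "hit")
Cue = Tuple[str, Callable[[dict], bool], str]

def fft_classify(case: dict, cues: List[Cue], fallback: str) -> str:
    for name, check, exit_label in cues[:-1]:
        if check(case):          # one-sided exit; otherwise ask the next cue
            return exit_label
    name, check, exit_label = cues[-1]
    return exit_label if check(case) else fallback  # final cue exits both ways

# Hypothetical warning cues for a piece of online health information
cues = [
    ("mentions no benefits and harms", lambda c: not c["benefits_and_harms"], "warn"),
    ("contains advertising",           lambda c: c["advertising"], "warn"),
    ("cites no sources",               lambda c: not c["cites_sources"], "warn"),
]
case = {"benefits_and_harms": True, "advertising": False, "cites_sources": True}
print(fft_classify(case, cues, fallback="no warning"))   # -> "no warning"
```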

As interpretable models that are transparent and educate those who use them, fast-and-frugal trees boost citizen empowerment (Harding Center for Risk Literacy, 2020a). They can be presented as a graphical tree structure, digitally in apps and on websites, or in analogue form on posters and in brochures, which makes them easy to integrate into consumer decision-making. In a nutshell, fast-and-frugal trees lend the expert's view on a problem of uncertainty, providing a heuristic, highly valid cue combination with which consumers can separate the wheat from the chaff.

In the following section, I describe selected expert-driven decision-tree developments from the consumer research project RisikoAtlas (Harding Center for Risk Literacy, 2020a). The developed tools boost consumers facing decisions under uncertainty across different domains: distinguishing between opinion and news; examining digital investment information; examining health information; recognizing quality in investment advice, fake online stores, and unfair loan advice; detecting conflicts of interest in investment advice; controlling app data; enabling informed participation in bonus programs and credit scoring; informing telematics rate selection; and protecting data from employers and against personalized prices.

#### **Methodology**

As for any decision-support model development with instance-based learning, the developer must sample problem instances (cases), select decision cues (features) (and, if faced with continuous cues, choose decision thresholds), and ensure that validation criteria are available. For FFT development specifically, the rank order of cues and their related exits is crucial and must be determined before validation. There are both manual construction methods (Martignon et al., 2008) and more complex construction algorithms using machine-learning methods (Phillips, Neth, Woike, & Gaissmaier, 2017) for developing FFTs.
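For the manual route, a toy sketch of the most basic construction step, ranking binary cues by their validity before chaining them into a tree, in the spirit of Martignon et al. (2008); the data, cues, and the simplified validity definition (share of "warn" cases among the cases where a cue fires) are invented for illustration:

```python
# Rank binary cues by a simple validity measure: among the cases where the
# cue "fires", how often is the expert label "warn" (1)? Toy data only.
def cue_validity(cue_values, labels):
    hits = sum(1 for c, y in zip(cue_values, labels) if c and y)
    fires = sum(cue_values)
    return hits / fires if fires else 0.0

labels = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = "warn" according to experts
cues = {
    "no sources":  [1, 1, 0, 1, 0, 1, 1, 0],
    "advertising": [1, 0, 0, 1, 0, 0, 0, 0],
    "no numbers":  [0, 1, 1, 1, 0, 0, 1, 1],
}
ranked = sorted(cues, key=lambda name: cue_validity(cues[name], labels),
                reverse=True)
print(ranked)   # order in which the cues would enter the tree
```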

However, the direct application of FFT-construction algorithms presupposes that a data set with problem instances, decision cues, and validation variables is available. Yet in consumer decision-making, usable data sets are the exception in highly dynamic and algorithm-controlled decision environments. Accordingly, developers must sample problem instances from the environment, select decision cues based on expert judgment and the literature, and collect or investigate validation variables (cf. Keller et al., 2014). Here, I outline one expert-based development process (see Fig. 4.1).


**Fig. 4.1** Development pipeline according to a "case validity" FFT construction method. Source: Adapted from https://www.risikoatlas.de/en/consumer-topics/health/examining-health-information. Copyright 2020 by the Harding Center for Risk Literacy. Adapted with permission

This FFT development method ("case-based cue validity") requires the developer to consider not only the number of potential cues that must be taken into account in the modelling, but also the prevalence of the target, that is, of what the decision tree should help its user to recognize.


Statistical cue selection can be supported with boruta (Kursa & Rudnicki, 2010) and the caret package in R (Kuhn, 2008). With boruta, the developer can check an individual cue's validity in so-called random forests. If a cue behaves like a random number in a tree-based prediction of the label, he or she should not select it. Because the process is based on random sampling, the assessor should not ignore prior knowledge: If a known causal relationship exists, he or she should select the cue regardless and retest it with more cases. The process of coding, scoring, and statistical feature selection can be done iteratively to achieve a manageable set of robust cues more efficiently. Cue coding effort and expert assessments depend on this set.
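The "behaves like a random number" test can be sketched as follows, assuming a scikit-learn workflow: real cues compete against shuffled "shadow" copies of themselves in a random forest, in the spirit of boruta. This simplified, single-pass illustration is not the project's boruta/caret pipeline.

```python
# Boruta-style cue screening (simplified): a cue whose random-forest
# importance does not beat the best shuffled "shadow" feature behaves
# like a random number and is dropped.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=6, n_informative=2,
                           n_redundant=0, random_state=0)

X_shadow = rng.permuted(X, axis=0)   # shuffle each column independently
X_aug = np.hstack([X, X_shadow])
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_aug, y)

imp = forest.feature_importances_
real, shadow = imp[:X.shape[1]], imp[X.shape[1]:]
threshold = shadow.max()             # best importance achieved by pure noise
for i, v in enumerate(real):
    verdict = "keep" if v > threshold else "drop (random-like)"
    print(f"cue {i}: importance {v:.3f} -> {verdict}")
```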


#### **Use Cases**

#### *Selecting Digital Health Information*

**Starting Point** A comprehensive amount of health information on the web gives consumers the opportunity to learn about symptoms and the benefits or harms of medical interventions. Yet the quality of digital health information varies dramatically (Rebitschek & Gigerenzer, 2020). Misleading information leads to misperception of risks and prevents informed decisions (Stacey et al., 2017). Many sites have undeclared conflicts of interest. However, algorithmic curation of search results, both on the web and in news channels across social media platforms, rarely comes with quality-dependent weighting (countermeasures were implemented after this study). To prevent serious consequences, consumers should be empowered to better recognize the quality of health information on the internet (Schaeffer, Berens, & Vogt, 2017).

**Goal** How can one enable readers to distinguish between digital health information that promotes informed decision-making and information that does not, even when readers do not actively search for the potential benefits and harms of decision options?

**Cases, Cues, and Criteria** My team and I analyzed 662 pieces of health information on German-language websites (Rebitschek & Gigerenzer, 2020). Of these, 487 were collected openly by experts, from Similarweb's health catalogue, and from Google and Bing using medical condition terms (cf. Hambrock, 2018) of diseases and instrumental terms such as "How do I recognise X?". Another 175 pieces were sampled by laypersons on given topics (vaccination against mumps, measles, and rubella; antibiotics for upper respiratory tract infections; ovarian cancer screening). We artificially enriched the sample with randomly drawn pages from websites that claim to follow the medical guideline for evidence-based health information in Germany, an intentional oversampling compared to a random selection. We aimed to predict the median classification judgments (label) of three experts per piece about whether a piece enables or prevents informed health decision-making (criterion). The experts came from health information research, health insurance companies, the Evidence-Based Medicine Network, and representatives of health associations with professional experience in the field of health information. We gave these experts no information about potential cues used in the study.

**Development** By adhering to the evidence-based "Good Practice Health Information" (EBM-Netzwerk, 2016) and the DISCERN standards, we identified 31 and 39 cues, respectively, as verifiable by consumers. Elimination of redundant cues resulted in 65 cues. We conducted our cue selection stepwise using statistical methods, lay and expert comprehensibility, and usability. Finally, we considered 10 cues for modelling with R. The final consumer tree with four cues is shown in Figure 4.2.

**Interpretation** A warning means that one is probably unable to make an informed decision based on the piece of health information in question. There can be many reasons for this: Essential information may be withheld, or the piece may be advertising or have an unprofessional design. In addition, following the decision tree may lead one to a wrong conclusion, because the classifier is not perfect.

**Validation of Efficacy** Cross-validation on our health information set showed the tree's reliability: It resulted in a balanced accuracy of 0.74. Following the FFT, users were warned about nine out of ten pieces of health information for which experts stated they would not have been able to reach an informed decision. Notably, the decision tree enabled users to recognize only six out of ten of the cases for which experts stated they could have reached an informed decision.
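For orientation, balanced accuracy is simply the mean of the two per-class hit rates; with the rounded rates just reported:

```python
# Balanced accuracy = mean of the two class-wise hit rates (rounded figures)
warn_rate = 9 / 10      # uninformative pieces correctly flagged
accept_rate = 6 / 10    # informative pieces correctly recognized
print((warn_rate + accept_rate) / 2)   # 0.75, consistent with the reported 0.74
```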

**Fig. 4.2** Fast-frugal tree to promote consumers' search for evidence-based information that supports informed health decisions. Source: Adapted from https://www.risikoatlas.de/en/consumer-topics/health/examining-health-information. Copyright 2020 by the Harding Center for Risk Literacy. Adapted with permission

**Validation of Effectiveness** With a lab-experimental evaluation (N = 204, 62% female, average age 40 years), we showed that the fast-and-frugal tree supports the assessment of health information. Independent experts assessed laypeople's findings in free internet searches for evidence-based health information. They rated users' search results on a four-point scale as worse in cases without a decision tree (2.7; a rather uninformed choice) than in those with one (2.4; a rather informed choice).

#### *Selecting Digital Investment Options*

**Starting Point** Consumers today commonly invest money on the internet, including in products of the so-called grey capital market. Direct-to-consumer investment options particularly lack the presence or advice of an expert. Potential investors must judge opportunities by relying either on the information given or on lateral sources (e.g., review pages). Many product providers are subject to less supervision than banks: for example, algorithmic advice that is designed not to qualify as advising (according to German law) but is often labelled as a "robo-advisor." Transparency, even at the level required by law, is often absent, because the algorithms' architects intentionally hamper the weighing of potential gains and losses, and of further risks.

**Goal** How can one enable potential investors to distinguish between digital investment options that are trustworthy, because they inform decision-making, and others that aim at blocking information, preventing the weighing of potential benefits and risks?

**Cases, Cues, and Criteria** My team and I analyzed 693 investment options on the web that were available to consumers in Germany. We searched for individual terms on Google and Facebook (bond, retirement provision, fund, investment, capital investment, return, savings, call money, securities), and after 100 options combined them with terms like interest, share, guarantee, gold, green, precious metal, and ETF. We identified a further 180 cases through lay research. Furthermore, we manually sampled individual information on project offers on crowdfunding platforms. We did not include overview pages of individual banks on various capital investments (i.e., tabular listings of key figures on specific investment opportunities), advisory offers by banks or independent brokers, insurance companies, and financial managers. We aimed to predict the median classification judgments (label) of three experts per offer about whether an offer enables or prevents informed investing (criterion). Forty-two experts with academic or practical professional experience in the design of finance information evaluated the cases. We gave the experts no information about potential cues used in the study.

**Development** Based on various sources, we selected 138 cues, of which we considered 72 assessable in principle by laypersons after eliminating redundancies following an initial test. We conducted our cue selection stepwise using statistical methods, lay and expert comprehensibility, and usability. Finally, we considered seven cues for modelling. The final consumer tree with four cues is shown in Figure 4.3.

**Interpretation** A warning means that informed investing is unlikely based on the offer in question. There can be many reasons for this: The provider could be interested in customers not making an informed decision, or the offer could simply be unprofessional. Also, following the decision tree can lead to a wrong conclusion, because the classifier is not perfect. Using the tree produces no insight into the quality of the offers themselves.

**Fig. 4.3** Fast-frugal tree to promote consumers' search for trustworthy investment opportunities that promote informed investing. Source: Adapted from https://www.risikoatlas.de/en/consumer-topics/finance/examining-digital-investment-information. Copyright 2020 by the Harding Center for Risk Literacy. Adapted with permission

**Validation of Efficacy** Cross-validating the identified decision tree revealed a balanced accuracy of 0.78. Users were able to detect eight out of ten offers that enable informed investing, and to reject seven out of ten offers because they do not enable informed investing.

**Validation of Effectiveness** With a lab-experimental evaluation (N = 204, 62% female, average age 40 years), we showed that an early version of the fast-and-frugal tree supports the search for consumer investment options on the web. Independent experts on finance investments assessed the laypeople's findings of investment options. They revealed that 385 out of 490 offers did not allow for informed investing. Although providing the tree did not lead participants to choose the rare options where they could invest on an informed basis more often, they at least became much more careful with investing in general, reducing the median initial hypothetical investment from 1000 to 500 € (retirement saving) and from 2500 to 1000 € (wealth accumulation).

#### *Distinguishing News and Opinion Formats*

**Starting Point** Social media users are more likely to like and share fake news than real news, which is directly linked to the configuration of the algorithmic distribution. Consequently, algorithm-based news coverage leads to misconceptions and makes social exchange more difficult. As fake news detection is challenging, a first step is to support users in distinguishing between news formats and opinion texts.

**Goal** How can one enable users to distinguish between opinion formats and real news on social media and on websites?

**Cases, Cues, and Criteria** We fully analyzed 558 texts from German-language websites. Our topic selection, based on fact checkers, included "migration background," "chemtrails," "contrails," "Islam," "Muslims," "Israel," "cancer," "unemployed," "gender," "Russia," "VW," "left-wing extremism," "autonomists," "right-wing extremism," "money," and "climate." We complemented searches on Bing News, Google News, Facebook, Twitter, and those conducted with Google's "auto-complete" function with individual texts from the fake news portals described earlier. We aimed to predict the median classification judgments (label) per text of three journalists with professional experience in print and digital media about whether the text's authors had satisfied or violated professional standards of the journalistic format "news" (criterion). We gave these experts no information about the potential cues used in the study.

**Development** Based on various sources, we collected 86 cues, of which we considered 50 to be basically verifiable by laypersons. We conducted our cue selection stepwise using statistical methods, expert comprehensibility, and usability. Finally, we used ten cues to model the satisfaction of journalistic standards. The final tree with four cues is shown in Figure 4.4.

**Interpretation** A warning means that the text violates professional journalistic standards of the news format. Examples are advertising, unprofessional texts, opinions such as a commentary format, a satirical format, or so-called fake news. In some cases, those following the decision tree may reach the wrong conclusion, because the classifier is not perfect.

**Fig. 4.4** Fast-and-frugal tree to help consumers classify news and opinion pieces. Source: Adapted from https://www.risikoatlas.de/en/consumer-topics/digital-world/distinguishing-between-opinion-and-news. Copyright 2020 by the Harding Center for Risk Literacy. Adapted with permission

**Validation of Efficacy** Cross-validating the decision tree, we reached a balanced accuracy of 0.76. Those following the decision tree recognized nine out of ten texts that were definitely not news as such, and similarly confirmed more than six out of ten real news texts.

**Validation of Effectiveness** With a lab-experimental evaluation (N = 204, 62% female, average age 40 years), we showed that 85% of laypeople applying the fast-and-frugal tree to 20 texts recalled all of the tree's cues after a short delay. Providing participants with the tree increased the overall classification accuracy from 74% to 78%, with a major advantage in confirming real news, from 74% to 83%.

#### **Discussion**

Highly uncertain, non-transparent, algorithm-controlled decision environments pose a threat to informed decision-making. Researchers have established that consumers are aware that the algorithms informing their decisions are imperfect, for example in credit scoring, person analysis, and health behavior assessment (Rebitschek, Gigerenzer, & Wagner, 2021b). Yet consumers need more than awareness—they need applicable and educative tools (empowerment) to help reduce uncertainty.

With the help of three use cases, I have shown that fast-and-frugal decision trees can help users to distinguish quality-assured information from other pieces. Although efficacy in terms of absolute classification accuracies seems to be moderate, three arguments are relevant for their interpretation. First, to the best of my knowledge, consumer support tools, at least in Germany, have never before been validated with such empirical tests (for an overview of health information search support, see Rebitschek & Gigerenzer, 2020). Thus, no one knows whether more accurate tools are even available. Second, a benchmark of absolute numbers is less relevant than a relative improvement over the current situation. This leads to the most important point, the validation in terms of effectiveness: Given the moderate efficacy, the decision-makers in our studies still made somewhat better choices and learned something.

Thus, researchers within the field of consumer education should consider public engagement when developing uncertainty-reducing decision-support tools. FFTs are promising tools for boosting consumer competencies (Center for Adaptive Rationality, 2022), for instance for direct investment on the internet, in financial advice, or in the informed choice of a telematics tariff. They have been disseminated with a consumer app (Harding Center for Risk Literacy, 2020b). The next step has to be a pipeline that enables organizations aiming to protect citizens or consumers to develop and update such trees on a regular basis.

Even competence-promoting decision trees are always a temporary solution: Environments are dynamic, and cues lose their predictive validity over time. Furthermore, transparent decision-support tools can be subject to gaming when information providers and decision architects merely aim to fulfill a desired cue status rather than actually improve their offers. Architects should not only include causally related cues that cannot be gamed easily, but also subject their products to continuous updates.

As for any decision-support algorithm, the FFTs' limitations lie in their imperfect performance (classification errors). Therefore, actors must determine their follow-up actions carefully. In addition, the procedural fairness of information or products can be insufficient (e.g., when female consultants have a higher risk of misclassification), which needs to be controlled for every tree. Finally, decision-supporting tools, particularly algorithm-based decision-making, set new norms; they carry a certain normativity. The importance of chosen criteria and cues can generalize, including to human decision-making. In addition, those introducing an algorithm cannot guarantee the effectiveness of its implementation in terms of side effects, adverse events, and compensating behavior. Therefore, the most crucial factor for consumer empowerment algorithms—those which pre-select, curate, and personalize content, information, and offers—is regulatory examination. Empowerment and transparency have clear-cut limits, particularly in markets of data-driven behavioral prediction and control (e.g., consumer scoring; Rebitschek et al., 2021a), which helps define regulatory initiatives. This in turn emphasizes that the regulation of knowledge and technology depends on the extent to which consumers become literate enough to shape the participatory political and societal discourse on algorithm-based decision-making—the actual goal of empowerment.

**Acknowledgments** This work is part of the RisikoAtlas project, which was funded by the Federal Ministry of Justice and Consumer Protection (BMJV) on the basis of a resolution of the German Bundestag via the Federal Office for Agriculture and Food (BLE) within the framework of the Innovation Programme.

#### **References**


**Dr. Felix G. Rebitschek** is Head Research Scientist and CEO of the Harding Center for Risk Literacy. Taking into account the nature of decision problems, his research aims at identifying accessible problem representations in risk communication in order to empower decision-makers, for example with educative interventions about digital health information. An associate researcher at the Max Planck Institute for Human Development in Berlin, he is a cognitive psychologist with a focus on algorithm-based decision-making, and he trains health professionals and managers internationally on risk communication, risk literacy, and decision-making under uncertainty.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 5 The Datafication of Knowledge Production and Consequences for the Pursuit of Social Justice**

**Nancy Ettlinger**

The persistence of data-science1 practices that commonly result in injustices, especially for minoritized populations, is puzzling. A large, critical literature on algorithmic governance associated with advances in the digital sciences aptly identifies biases and limitations of big-data analyses and prescriptions, how these problems have conditioned life in the digital economy, and their destructive, uneven, and unjust effects, but we nonetheless lack an explanation for why and how this dire situation remains tolerated and continues relatively unabated. Alongside climate change, I regard deepening socio-economic polarization worldwide and the perpetuation of systemic racism as crucial existential problems that demand critical attention. Broadly, this paper contributes to explaining one dimension of our societal predicament, namely the *persistence* of the production and deepening of inequality and injustice through data-science practices despite abundant evidence of their destructive effects.

Based on a critical synthesis of literature from the interdisciplinary field of critical data studies,2 education studies, economic geography, and innovation studies, I develop several interrelated arguments. I locate the problem of toleration and normalization of new, digital forms of injustice in the production of knowledges that is crystallizing in educational institutions, which broadly shape thought processes. The unfolding of algorithmic governance in the new millennium has pervaded the education sector, specifically through the burgeoning "edtech" industry, an assemblage of apps, devices, software, hardware, and platforms designed to *datafy* student knowledges (Witzenberger & Gulson, 2021); that is, it quantifies knowledges for the purposes of analysis and manipulation for profit (Mayer-Schönberger & Cukier, 2013; van Dijck, 2014; Zuboff, 2019). This approach to knowledge production is accompanied by a particular pedagogy, which I argue inculcates values that are conducive to technocratic thinking, and frames knowledge generation in decontextualized, non-relational terms, thereby prefiguring social injustice in a world beset with intensifying societal tensions and polarization. Contextualization and relationality through the lens of social justice are crucial missing links that would permit actors to situate their actions (Haraway, 1988); they signify key mental capacities and related practices that enable subjects to connect abstract ideas with on-the-ground processes across time and space and to recognize the power relations that lace social relations (e.g., Ettlinger, 2003; Massey, 2005; Yeung, 2005). Further, contextualization and relational thinking through a social-justice lens position people to situate their thoughts and practices *responsibly*, with attention to the relation between one's own practices and those of others, the context that one's practices affect, and consequences. The edtech industry aligns with the logic of algorithmic governance under the regime of big data3 insofar as it eschews causality to prioritize correlations of decontextualized data; the pedagogy accompanying the edtech industry follows suit and, as I will show, conceptualizes knowledge generation in terms of what students can *do*, a matter of performance, without attention to whether they ought to do what they do, or to the situation of their actions relative to a chain of activity and associated effects. Despite the problems, a celebratory discourse casts the new education paradigm as a "disruptive innovation" (Christensen, 1997; Christensen, Horn, & Johnson, 2008) that delivers a new and improved learning experience. However, this laudatory discourse itself lacks contextualization, resulting in misconstrued claims about the datafication of knowledges that cast new technology (edtech) as the catalyst for a new, improved pedagogy. I show how the prevailing, problematic pedagogy is longstanding, predating digital technologies of the new millennium, although its target population has changed over time from minoritized groups in the United States in the twentieth century to the entire population in the new millennium in the United States and worldwide. Finally, I conceptualize education as an upstream institution, which enculturates subjects in a mode of knowing and thinking that affects downstream applications in daily life, and I examine the ways in which the decontextualized and non-relational character of the prevailing pedagogy governs unwittingly irresponsible practices.

<sup>1</sup>Throughout this paper I refer broadly to "data scientists" unless I refer to specialists of a particular subfield (notably in the penultimate section), and to the "data sciences", which encompass a range of subfields such as data analytics, data science, visualization, software engineering, machine learning, and artificial intelligence (AI).

<sup>2</sup>The interdisciplinary field of critical data studies examines the problems of digital life and effects of algorithmic government (e.g., Dalton, Taylor, & Thatcher, 2016; Iliadis & Russo, 2016). It crosscuts the social sciences (including digital geographies), humanities (critical digital humanities), and law (the intersection of critical legal studies and data studies).

<sup>3</sup>We currently are in the second wave of artificial intelligence (AI). Under this regime, knowledge production requires big data, in contrast to the way human beings learn, which requires only a few observations. AI researchers pioneering the next wave of AI strive to render knowledge production the same as for human beings, based on a few observations—a feat that would obviate the need for big data as well as supercomputers, which big data requires. Before the rollout of generative AI, estimates for the arrival of this new wave ranged from 10 to 100 years, while some AI scientists are agnostic (Ford, 2018); generative AI signifies a bridge to the next wave and likely will expedite transformative processes.

I begin below with a brief background on problems with algorithmic governance generally, and subsequently I extend the issues to the education sector regarding the burgeoning edtech industry and the prevailing pedagogy. The main focus is on the United States, although, as I explain in the conclusion, the issues are pertinent worldwide, recognizing that problems and processes materialize differently across space relative to variation in institutional configurations and social, cultural, political, economic, and ecological histories. The next section situates the celebratory casting of the current trajectory in education as a "disruptive innovation," and explains how this discourse obfuscates realities. The following section pursues a brief genealogy4 of the so-called "new" pedagogy to demonstrate the fallacy of the technology-first approach of celebratory discourses of technocracy, as well as some not-so-apparent logics entangled in the current educational trajectory and the paradoxes and twists that have delivered the new learning paradigm. The penultimate section engages downstream effects of the upstream inculcation of technocratic values. Concluding comments pertain to the datafication of knowledge production relative to broad societal problems.

#### **Background: Algorithmic Governance and Its Discontents**

Just in the infancy of the digital era, we are witnessing the normalization of undemocratic, often devastating effects of technological advance. The problems are rooted not in a particular project, but rather in their diffuseness throughout the fabric of society. Datafication entails the extraction of data from individuals' digital footprint, without consent of, or payment to, digital subjects, thereby enacting routine erosion of basic privacy rights, continual surveillance, and exploitation of subjects by capitalizing on their personal data (Thatcher, O'Sullivan, & Mahmoudi, 2016; van Dijck, 2014; Zuboff, 2019). People interact with the internet in wide-ranging ways in daily life through, for example, internet searches; social media; smart devices ranging from phones and appliances to children's toys and adults' sex toys; the almost two million apps available worldwide that assist people with everything from transportation and shopping to meditation and menstruation tracking; platforms for work as well as consumption; and the internet of things (IoT), which embeds digital technology such as sensors or software throughout the environment to connect and exchange data for widespread activity, from energy usage to credit and financial information more generally. The pervasiveness of digital technology in the increasingly interrelated realms of social media, home, work, leisure, and intimacy5 reflects our immersion, willing or unwilling, conscious or unconscious, in digital systems in daily life.

<sup>4</sup> I use Foucault's (1998) sense of "genealogy" to historicize current problems in terms of various non-linear paths that have produced the present.

Routine data extraction without consent is orchestrated by big-tech firms, whose motive is profit, which supersedes other possible motives such as fairness, equity, transparency, and basic privacy. Beyond objectifying problems such as invasion of privacy, surveillance, and exploitation, deleterious subjective effects include addictive habits, as algorithms nudge users6 into continued use of digital-era accoutrements such as phones, social media, and apps to ensure continued usage and, therefore, profits (Chun, 2017; Cockayne, 2016; Ettlinger, 2019). Emblematic of the prioritization of profit is the mundane example in online shopping of the profusion of choices, which are designed not with the user in mind, but rather to increase usage time in the interest of profitability (Sullivan & Reiner, 2021, p. 418), an instance of what media scholar Simone Natale (2021) considers the deceitfulness of media in the digital era.

Governance in general has become reliant on algorithmic designs that embed biases relative to longstanding societal hierarchies resulting from classism, racism, misogyny, homophobia, xenophobia, ableism, and ageism. Beyond the problem that biases exist in the real world and therefore exist in designs (Christian, 2020; Crawford, 2021), the overwhelming constitution of the data sciences by privileged white men – the "diversity crisis" – feeds bias-driven problems (Crawford, 2016; Snow, 2018). Urban planning around the world, especially in association with "smart planning," is designed, orchestrated, and implemented by tech firms, for profit, while government steps in as a partner to legitimize the inscription of smartness on the landscape, unevenly. Smart-city applications commonly are sociospatially bifurcated, with systems intended to provide information and nurture entrepreneurialism in downtowns, whereas a system of punitive surveillance targets underserved communities of color (Brannon, 2017), governed by a "digitize and punish" mentality that unjustly targets marginalized communities (Jefferson, 2020). More generally, smart-city planning guided by the corporate sector tends to be piecemeal, focused on disparate for-profit projects related to compartmentalized problems such as parking and transportation, and IoTs in downtowns and select places of "opportunity," as opposed to a coherent plan to work towards a more socially and environmentally sustainable future throughout an urban social and political economy (Cugurullo, 2019). Algorithms inform the public-private planning complex and agents of the real-estate industry where to invest, as well as where to disinvest, notably in the same communities targeted for punitive surveillance (Safransky, 2020), while evidence of racialized bias in mortgage approval algorithms mounts (Martinez & Kirchner, 2021). The rise of big-data policing has generated a system designed to preempt crime by criminalizing marginalized individuals before crimes are committed, an insidious reversal of the "innocent until proven guilty" hallmark of democracy (Brayne, 2021; Ferguson, 2017). Everyday decisions ranging from judicial to hiring, firing, credit approval, and scheduling routinely discriminate based on race/ethnicity, gender, sexuality, and their intersections (Pasquale, 2015). Echoing the perpetuation of life under Jim Crow, mundane practices such as drinking from an automated water fountain or washing one's hands in a lavatory with automated soap dispensers require being white because the sensors are not designed to recognize Black skin (e.g., Benjamin, 2019). Search engines embed racist and sexist values (Noble, 2018). Algorithmic governance overall unjustly targets marginalized populations relative to multiple axes of difference and their intersections, prompting new vocabulary such as the "digital poorhouse" (Eubanks, 2017) and "weapons of math destruction" (O'Neil, 2016).

<sup>5</sup>These realms increasingly are interrelated as advances in digital technology have blurred the spatial division among them and reconfigured their relation (Richardson, 2017, 2020).

<sup>6</sup> Interestingly, "users" conventionally referred to *drug* users, addicts, and the term readily became the moniker for digital subjects.

Conceivably, one might argue that the well-worn path of neoliberalism7 as well as racism and many other "isms" are devoid of ethics, judiciousness, and sufficiently restrictive regulatory policy, and therefore that the apparent absence of such values in the new millennium is nothing new. However, pernicious mentalities are not accomplished facts; they are ongoing processes. Although systemic injustice is longstanding worldwide, it takes on different forms and manifests in different practices across contexts. The pertinent question is not whether injustice lies in the domain of continuity *or* change, but rather concerns the processes by which persistent injustices have changed, an approach that can inform ways to tackle problems, challenge mentalities, and pursue alternatives.

Concerned critics within the data-science community have called attention to a vacuum of ethical thinking (Floridi, 2015). On the other hand, critical media scholar Mark Andrejevic (2020) has argued that the fundamental problem pertains not to ethics but rather to a crisis in judgement that has resulted from the automation of judgement linked with the automation of media as well as of sociality and the dismantling of people's shared sense of community. Critical media scholar Kate Crawford (2021) similarly has argued that the focus on ethics is problematic, although for different reasons and with different conclusions. She argued that a focus on power brokers of twenty-first century technologies, from big-tech firms to universities, can curtail algorithmic violence through the development of appropriate regulations (see also Pasquale, 2015). However, calling for ethical thinking, lamenting lack of judgement, and calling for policy to rein in major actors complicit in the sins of artificial intelligence (AI) applications all beg the question as to *how the logic that permits tolerance of unjust, data-driven, technocratic solutions has become ingrained in digital subjects' minds*. I concur that the automation of judgement poses profound problems, and I endorse attention to both ethics and regulatory policy, but I argue that constructing real change at some point must engage *the systemization of a mode of knowing that renders unjust, data-driven, technocratic solutions persistently tolerable by society to the point of normalization, a matter of a societal-scale subjectivity*. I ask how, in the process of the smartification of society, a mode of knowledge production developed that bypasses ethics, judiciousness, and a sense of citizenship and community.

<sup>7</sup>Although the path of neoliberalism is well worn, its time span is open to question. The Marxist narrative pins the emergence of neoliberalism to the 1980s, the Reagan-Thatcher era (e.g., Harvey, 2005; Peck & Tickell, 2002); Foucault (2008), on the other hand, considers neoliberalism to have a much longer history relative to the rise of modern states; see also Jones (2012).

#### **Education in the Digital Era**

Although rarely called by its name, a pedagogy called "competency-based education and training" (CBET) prevails in the United States and around the world, while its "carrier" across all institutions currently is the edtech industry, the vehicle by which technology mediates CBET tenets. One value of online education promulgated by the edtech industry is that it can be customized, personalized, relative to students' needs, and this customized aspect of the current system has long been central to CBET pedagogy. Those who can complete assignments rapidly can do so, and those who need more time are accommodated. The discourse on the new education features the efficiency of the self-paced learning system, which de-standardizes the learning process insofar as it puts students in control of their learning. The approach shifts the role of instructor from "a sage on the stage" to "a guide on the side," rendering instructors facilitators of the management of information (King, 1993). Online education in turn renders students entrepreneurs of their own education, responsible for their progress in a new round of neoliberal practices.

In addition to the personalization component, CBET departs from evaluating students on what they know, and instead prioritizes performance—what students can *do*. Students in a CBET system demonstrate mastery of predetermined competencies, expressed in terms of expected learning outcomes (ELOs), which are assessed quantitatively. One fundamental problem, however, is that teaching for the learning outcome, like "teaching for the test," can leave considerable gaps in people's thinking. Just as different processes can result in the same pattern, a "right" answer can derive from different logics, with potential problems downstream in application. Further, the focus on skills and what people can "do" relegates content-oriented, contextual knowledges to secondary status, relevant only if such knowledges are useful in the performance of a task (Hyland, 1997). For example, a task such as the construction of hot spots of crime in a city requires no contextual knowledges regarding uneven surveillance; uneven arrest patterns across a city demonstrate the constructed nature of hot spots, which in turn unjustly stigmatize places and the people who live there (Jefferson, 2017). Skills—"doing"—while valuable and necessary, represent partial knowledges that lack connection with conceptual frameworks guiding action. The construction of hot spots, for example, conceptualizes places as bounded, without connection or relevance to other places across a city and beyond. Focusing singularly on tasks and the skills required to perform them neglects the contextual and conceptual knowledges that enable a student—or downstream, a worker—to raise questions and critically evaluate the tasks they execute, which implicitly are part of larger societal projects that may deliver injustices.

Despite these problems, CBET in the United States exists in various forms, both in traditional postsecondary institutions with tenure and in the private sector. The landscape of education is changing rapidly, although unevenly. Change is slowest in traditional colleges and universities that reward students for their "seat time" with credit hours towards courses and degrees, rather than exclusively for mastery of ELOs;8 however, incipient changes in the current context are evident in a new fervor over certificates that can be independent of degrees.

CBET in traditional colleges and universities is occurring on a piecemeal, experimental basis, notably regarding the specification of ELOs and increased accountability. In these institutions, CBET has been adapted in academic departments to the needs and demands of disciplinary issues within the longstanding structure of courses, majors, and degrees. The ELOs and proficiencies provide a vehicle for examining the effectiveness of teaching, and potentially offer a blueprint for substitute teaching when researchers buy themselves out of courses, take a sabbatical, or spend time in the field or at another institution. Student work on university learning platforms enables the datafication of their performance for assessment purposes, although at the time of the writing of this paper, this aspect of CBET tends to be optional in traditional colleges and universities, even if seductive because the automation of grading relieves instructors of evaluation.

In contrast, CBET in its purest form, which encompasses personalization, is unconstrained by curricular structure in non-traditional postsecondary institutions, rewarding students for their mastery of ELOs, quantitatively accountable, and pursued by students online through self-pacing. Emblematic of "pure" CBET in the new millennium, Western Governors, a thoroughly online, private university, began enrolling students nationwide in 1999 in self-paced programs designed for working adults.

New universities such as Western Governors entered the new millennium offering an educational alternative to traditional postsecondary education that solved both space and time problems for working adults in the context of precarious work. The shift from the salience of a primary to a secondary labor market associated with the decline of Fordism in the last quarter of the twentieth century produced what labor studies scholar Guy Standing (2011) called "the precariat," an internally heterogeneous class of people across wide-ranging occupations experiencing high levels of under-employment and job and wage insecurity. In the context of the digitalization of jobs in the new millennium, labor studies scholar Ursula Huws (2014) dubbed the burgeoning digital labor force "the cybertariat," an extension of the internally heterogeneous precariat into the digital realm in which the insecure and unjust conditions of the precariat have deepened (see also Ettlinger, 2016). The market for education in the new millennium thereby has encompassed underemployed adults across racial/ethnic, gendered, sexual, and aged axes of difference. Minoritized populations continue to bear the harshest burdens and injustices of the new economy (Cottom, 2020), while the general circumstances of precarity also characterize those of the previously privileged. By 2013, one-third of undergraduate students in the United States were over the age of 25, many of whom were working women with diverse responsibilities (Burnette, 2016). Enrollment in traditional colleges and universities declined because working students lack the time and money to dedicate four or more years continuously to education. The consequent decline in tuition-based revenue occurred concurrently with diminishing public investment in postsecondary education. Traditional colleges and universities responded to the changing context by increasing tuition fees, which in the new millennium amounted to twice as much educational revenue as in the 1990s (Gallagher, 2014; Weissmann, 2014). Ironically, the short-term, bottom-line thinking behind the tuition increases exacerbates circumstances in the long run because the costs of tuition have become unmanageable in the context of precarious work. Increasing numbers of young adults now seek alternatives, and nearly all "non-traditional" students, 90%, now take courses online (Rabourn, Brcka-Lorenz, & Shoup, 2018). Fully online courses enable working students and those with domestic responsibilities to access a postsecondary education they can complete at their own pace and without the requirement to leave work to arrive at a fixed space on a university campus. Focusing on professional fields such as IT, health and nursing, business, and teaching, new institutions emerged in the new millennium to provide training and certification at a fraction of the cost of traditional colleges in response to the changing student "market".9

<sup>8</sup>Data from the National Center for Education Statistics shows that in the Fall of 2019 the percentage of undergraduates enrolling exclusively in online courses was considerably higher in 4-year private, *for-profit* degree-granting institutions (68%) compared to those enrolling in exclusively online courses in 4-year private non-profit degree-granting institutions (17%) and those enrolling in 4-year public degree-granting institutions (10%) (see Figure 6 in National Center for Education Statistics, 2023). However, these data significantly undercount the overall percentage of students enrolled in exclusively online courses because the data are drawn only from degree-granting institutions. The data do not include, for example, students enrolled in online courses outside degree programs, either stand-alone courses or courses in certificate as opposed to degree programs. To date, conventional reporting systems have not incorporated new developments in the education sector such as the development of the edtech industry, which encompasses firms that offer coursework.

In the scramble to expand their market, traditional colleges and universities have developed new strategies. Many have incorporated distance learning into their curricula, which solves the space problem, yet leaves the time issue unattended because distance learning still requires working students to reserve time in their day for online classes. Leading private universities in the United States such as Harvard, MIT, and Stanford pioneered the next curricular innovation: massive open online courses, or MOOCs, which, like Western Governors University, solve both space and time problems. MOOCs have been branded as "high end" due to the prestige of the private institutions through which they are developed and delivered, and the internationally renowned professors who prerecord lectures; evaluation is automated and students pursue courses online, anytime, at their own pace per the CBET personalization model. The "massive" in the MOOC model reflects the *global* crowd of students that these courses target in association with a modernization discourse regarding the diffusion of high-end education throughout the world, encompassing low-income countries. However, MOOCs have been unsuccessful both at retaining students in all countries and at attracting students from underdeveloped world regions. Only a third of MOOC students come from low-income countries. Just over 3% of students enrolled in MOOCs through MIT and Harvard from 2012 to 2018 completed their courses in 2017–2018, the end point of a downward trend from 6% in 2014–15 and 4% in 2016–17; and almost 90% of students who enrolled in a MOOC in 2015–16 did not enroll again (Reich & Ruipérez-Valiente, 2019). These serious problems prompt questions regarding the value of the new education paradigm.

<sup>9</sup>Western Governors' website (https://www.wgu.edu/financial-aid-tuition.html#\_) indicates that as of August 2023 the average bachelor's tuition is \$8,010, compared with \$16,618 nationally, and master's tuition is \$8,444, compared with \$19,749 nationally.

New universities such as Western Governors as well as MOOCs in private, traditional universities now compete with edtech firms, encompassing startups, middle-market companies, and publicly traded companies that service elementary, secondary, and postsecondary institutions. Established edtech firms such as Coursera, Pearson, Udacity, and Edx that have collaborated with traditional colleges and universities by supplying them with platforms and apps now also offer their own courses and certificates (Mirrlees & Alvi, 2020),10 and edtech also now encompasses massive open online course *corporations* (MOOC*C*s) that work with professors at traditional universities (Mirrlees & Alvi, 2020). Further, the edtech sector has spawned a generation of "meta edtech" firms that monitor, evaluate, broker relations among stakeholders, and shape the direction of the industry (Williamson, 2021). "Meta edtech" also encompasses "evidence intermediaries," which provide platforms that evaluate commercial edtech products and services for schools and parents. Another type of "evidence intermediary" is market intelligence firms such as HolonIQ, which offers global "educational intelligence" that assesses the market value of edtech companies as well as world regional markets and their potential for edtech investment (Williamson, 2021).

Strategies for the delivery of technologically mediated education vary from a blend of labor- and capital-intensive approaches to thoroughly capital-intensive ones. "Blended learning" is a combination of synchronous and asynchronous educational delivery, and private-sector edtech firms emphasize asynchronous education while offering a brief "bootcamp" approach to satisfy a synchronous learning component (Perdue, 2018). The brief time required for in-class, "bootcamp" learning caters to working adults with little time to leave work, while the asynchronous approach is amenable to a "plug and play," standardized approach to courses taught across institutions to minimize set-up costs. More generally, non-traditional educational establishments initially met the high costs of incorporating educational technology in the learning enterprise by reducing labor costs, specifically by jettisoning the professoriate and implementing a Taylorist division of education into tasks for non-tenure-track, low-paid education professionals scattered across various functions such as instructional design, assessment, counselling, and subject-matter development (Berrett, 2016). Labor-market optimists might argue that such new developments represent a case of "creative destruction" because new types of jobs have been created to replace single positions. However, the low pay and the untenured, insecure nature of the new jobs reflect the casualization of academic labor, which inevitably will pervade traditional colleges and universities, even if at a much slower pace than in non-traditional institutions.11 By the second decade of the new millennium, the edtech industry had incorporated fully capital-intensive methods with automated teaching and evaluation, early stages of AI tutors, and blockchain technology to write and validate student transactions across institutions.

<sup>10</sup>Critical media technology scholars Tanner Mirrlees and Shahid Alvi (2020, p. 64) anticipate a decline in the number of these firms, reflecting an increase rather than a decline in their power as a matter of consolidation.

The imminence of AI tutors as a norm is concerning because AI currently lacks the capacity for explanation and contextualization; it can describe, yet with difficulty because decontextualized correlations often result in spurious conclusions, such as Black Americans misidentified as gorillas or an overturned school bus on a road misidentified as a snowplow. Further, the binary foundation of algorithmic logic aligns with a "right"/"wrong" approach to evaluating student performance, a mode of evaluation outside the domain of argumentation as a mode of learning, knowing, and expression. The "right"/"wrong" binary lacks awareness and appreciation of multiple perspectives and forfeits scrutiny of assumptions that would cast doubt on the tidiness of unilateral thinking. Assumptions underlie all perspectives and guide a subject towards particular types of information, methods, conclusions, and recommendations. From this vantage point, "right" and "wrong" reflect the perspective adopted by those developing questions, answers, and curricula more generally, to the exclusion of other perspectives, without attention to alternative conceptualizations, their context, and significance. Herein lies a principal source of bias in the new pedagogy.

Blockchain, as an emergent arm of edtech, may be increasingly salient in traditional colleges and universities to permit students to transfer credits between CBET and non-CBET programs (Burnette, 2016, p. 90). With an eye to the future, the edtech vision is to enable the burgeoning non-traditional student population to enroll in courses in institutions around the world, documenting and transferring course credentials or ELOs with ease through blockchain while "professors" take on the new role of advising students in customizing their inter-institutional, international curricula (Williams, 2019).

Beyond new universities committed to tech-mediated CBET and an expanding privatized edtech sector, big-tech firms themselves are expanding into education. For example, students can now earn certificates from Google in just 3 to 6 months at the low cost of \$49 a course; to affirm the credibility of the program, Google has indicated that the certificates substitute for regular college/university degrees for eligibility for jobs at its own company (Trapulionis, 2020). Big tech also has become an important component of edtech philanthropy.12 These firms' considerable support of the automated, personalization model of education is self-serving insofar as they are invested in the profitability of innovations and, crucially, in the data collected from students, the "oil" of datafied education in the new millennium. Edtech and big-tech companies adopting edtech practices are fast becoming the new agents of knowledge production.

<sup>11</sup>Some traditional universities in the United States already have dismantled the tenure system, replacing it with fixed-term contracts, while other traditional institutions have extended the outsourcing of selected courses to lecturers to a system that incorporates a new class of non-tenure-track instructors on fixed-term contracts with a salary ceiling.

Currently, all educational institutions,13 traditional and non-traditional alike, are developing *learning analytics*, whereby student information from platforms as well as applications is mined and datafied. The purpose is to profile students so that "problem students" can be identified early to permit "intervention", a structural mimicking of the predictive profiling of minoritized populations at a societal scale, specifically in the education sector of the surveillance economy (Zuboff, 2019), without regard for the systemic biases that contribute to profiling (Benjamin, 2019; Eubanks, 2017; Jefferson, 2020; Noble, 2018). Learning analytics in cash-strapped traditional colleges and universities unloads the costs of development and new releases of software to vendors (Burnette, 2016, p. 90) while ostensibly helping to stem attrition, and does so by eroding students' privacy without their consent.

More generally, learning analytics is emblematic of the use of big data in the education sector. As in big tech's governance of populations generally, analytical use of AI in the education sector depends on big data pooled from populations rather than samples, and proceeds based on correlations among data that have been decontextualized (Bolin & Schwartz, 2015). Rather than focusing on causes of problems, learning analytics is based on correlations of patterns in the past to preempt problematic practices in the future through intervention in the present (Witzenberger & Gulson, 2021). The value of students in this system is that they are the source of data; per critical philosopher Gilles Deleuze (1992), they are "dividuals"—sets of data points subject to manipulation by machine learning—as opposed to *in*dividuals with agency whose actions are situated and require contextualization. Although learning analytics is considered valuable for its discovery of patterns (Beer, 2019), clustering techniques in learning analytics assign "dividuals" to groups not on the basis of discovery, but rather based on mathematical construction using predetermined parameters and criteria (Perrotta & Williamson, 2018). Observing market-ready innovations at an edtech trade show targeted to educational institutions, critical education scholars Kevin Witzenberger and Kalervo Gulson (2021), for example, observed the use of patterns of student mouse movements and response times to questions as the basis for the modelling of learning pathways. This "innovation" evaluates and purportedly preempts problems based on patterns outside the scope of assigned tasks, without students' awareness that mouse movements or response times will affect their learning pathway.14 Learning analytics is extending into the realm of emotions with the use of psychometrics, sentiment analysis, natural language processing, face cams, and other modes of biometric dataveillance (Lupton & Williamson, 2017). Far from an ivory tower, the education sector is firmly embedded within the broader digital economy.

<sup>12</sup>Other philanthropic support comes from nonprofits such as the Carnegie Corporation, the Michael and Susan Dell Foundation, and Achieve, and from private foundations such as MacArthur and Barr. Overwhelmingly, edtech's philanthropic support emanates from private-sector gatekeepers of big tech, notably the Bill and Melinda Gates Foundation, the Chan Zuckerberg Initiative, the Google Foundation, and the Hewlett Foundation (Regan & Steeves, 2019). In addition, global venture capital investment in edtech increased to \$7 billion in 2019 from \$0.05 billion in 2010 (Southwick, 2020). Even traditional colleges and universities have become absorbed into the business of education, often hiring administrators who have business experience but lack higher academic degrees (Mirrlees & Alvi, 2020).

<sup>13</sup>As Deborah Lupton and Ben Williamson (2017) have pointed out, individuals or "dividuals" are subject to dataveillance, analysis, and commercialization of personal data from the time one is a fetus onward.
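The point about predetermined parameters can be made concrete with a standard clustering routine: in k-means, for instance, the analyst fixes the number of groups before any pattern is "discovered." The following minimal sketch is purely illustrative, uses hypothetical data, and is not drawn from any actual edtech product.

```python
# Purely illustrative: k-means assigns every "dividual" to one of a
# predetermined number of groups; the grouping is a mathematical
# construction shaped by that parameter, not a neutral discovery.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Hypothetical data: rows = students, columns = behavioral features
# (e.g., response times, mouse-movement summaries).
X = rng.random((200, 4))

# n_clusters is set in advance by the analyst.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])  # each student lands in one of exactly three groups
```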

#### **History of the Pedagogical Present: Contextual Dynamics in the Twentieth Century and Contradictions of CBET Wellsprings**

Even insightful critical scholarship on digital-era education has focused on the technologies that mediate education (e.g., Mirrlees & Alvi, 2020; Williamson, 2017), and work that focuses on the accompanying pedagogy presumes that it is new and was developed to implement the emergent edtech industry. Indeed, business and innovation scholar Clayton Christensen and his colleagues (2008) presciently recognized the big-business aspect of the new edtech industry just before the end of the first decade of the new millennium. They argued that the edtech industry represents a case of "disruptive innovation," and that the computer-driven technological infrastructure for education would prompt a change in pedagogy that would change education as we know it, decidedly for the better. However, *the so-called "new" pedagogy has a history that would have predicted considerable dissatisfaction*; the pedagogy, and its ills, *preceded* the technology.

Competency-based education (CBE) emerged in the United States in the late 1950s, emphasizing ELOs and quantification; the inclusion of "training" (CBET) reflects the vocational orientation that became salient in the 1960s, when the personalization tenet was introduced, and has remained central through the present. The impetus for the development of a new approach to education was a sense of the United States falling behind when the former Soviet Union launched Sputnik I in 1957, causing concern regarding the competitiveness of the skill base of the US citizenry (Elam, 1971; Hodge, 2007; Tuxworth, 1989). Enacted the following year, the National Defense Education Act brought education into the purview of federal policy and provided funding for education, notably in STEM fields and languages. However, demands changed in the next decade, the civil rights era.

<sup>14</sup>Recording response times would seem to contradict the self-pacing imperative that is central to online learning.

The frame of the new approach to education changed in the 1960s to assist marginalized populations, especially Black Americans, who had "slipped through the cracks" of US post-war prosperity. The government extended funding beyond STEM to all fields and focused on teacher training and vocational programs outside traditional educational institutions to provide "disadvantaged" populations—a euphemism for "underserved"—with skills for jobs. Whereas the agenda behind skills-based education directly following Sputnik emphasized STEM to achieve competitive advantage internationally in what became the space race, the unfolding of CBET in the next decade reoriented the skills imperative towards a pipeline to jobs for "non-traditional" students in a racialized society.

The liberal agenda of the 1960s therefore was to institute a skills-based vocational approach to education to support diversity and ensure equity and inclusion in the US opportunity structure (James, 2019). The emphasis on skills required a pedagogy focused on student performance, a problem directly amenable to the establishment of ELOs, with inspiration in educational theory from Benjamin Bloom's (1956) taxonomy of educational objectives, published just 1 year prior to Sputnik. The rollout of the new pedagogy entailed specification of multiple proficiencies associated with each ELO to permit quantitative evaluation and ensure objectivity in the new science of education, establishing confidence in the order of the system (Kerka, 1998). Competence in proficiencies would demonstrate mastery of ELOs and preparedness for jobs. A little more than 10 years after the publication of his taxonomy of educational objectives, Bloom (1968) incorporated the principle of student-centered learning via self-pacing in his framework, accommodating the diversity agenda of the civil rights era and crystallizing the imbrication of personalization with ELOs and quantitative assessment. While contextual dynamics prompted a change from targeting the general population for skill development for purposes of international competition to targeting unemployed minoritized populations for skill development for jobs, academic influences contributed to the pedagogic principles that were to guide the liberal process.

Eclectic and selective intellectual wellsprings reflect inconsistencies that arguably produce problems while also helping to explain the multiple versions of CBET (Kerka, 1998) that developed within and across different types of educational institutions in the twenty-first century (Klein-Collins, 2012), as discussed in the previous section. A pivotal intellectual wellspring for CBET was the scholarship of experimental and behavioral psychologist Burrhus Frederic Skinner (1968), who pioneered the quantitative, "scientific" examination of animal behavior, which he maintained is similar to that of human beings and therefore useful in the management of people's behavior. He was interested in shaping animals' behavior by narrowing and reinforcing a prescribed set of desired behaviors, analogous to the pre-determination of learning outcomes set by teachers for learners in CBET. Also pertinent to CBET's exclusive focus on performance, Skinner's (1968) approach casts anything that cannot be observed directly as irrelevant, a basic tenet of positivist science.

The scientific mode of analysis in the social sciences, education, and various fields across academe developed in an emergent socio-technical milieu buttressed by the introduction of computers and their widespread use in academe and think tanks, encompassing wide-ranging developments from Ludwig von Bertalanffy's (1968) systems theory to a quantitative revolution in methods across many academic fields. As the education sector became responsibilized for its accountability (Houston, 1974), the systematization of data permitted quantitative assessment of students' performance on proficiencies and mastery of ELOs, as well as the quantitative assessment of whole curricula. Quantification lent the pedagogy legitimacy through the presumed neutrality and objectivity of a "scientific" approach to assessment. During the '60s and '70s and throughout most of the twentieth century, CBET was implemented in non-tenure-track educational institutions associated with what became known as the Performance-Based Teacher Education Movement (PBTM), amenable to quantitative assessment (Hodge, 2007; Gallagher, 2014). Yet *dropout rates from CBET programs were high* (Grant, 1979; Jackson, 1994),15 anticipating the current situation of MOOCs. Despite this fundamental problem, the movement eventually spread internationally by the 1990s to Canada, the UK, continental western Europe, Australia, and Africa, and extended topically to professional fields such as medicine, health, and IT (Lassnigg, 2017). The fervor regarding quantification via the pedagogical innovations of ELOs and personalization apparently outweighed signs that the personalization of CBET was insufficient to deal with the problems of diversity to which the pedagogy purportedly responded.

The intellectual activity in the '60s connected with another, familiar wellspring: Taylorism, which has been a pervasive influence on societal trends from the early twentieth century through the present. Named after Frederick Taylor (1911), who published *The Principles of Scientific Management* in 1911, Taylorism implicitly framed CBET in two ways. First, Taylorism embraces efficiency by way of developing a detailed division of labor so that each individual becomes proficient in specific jobs. Analogously, CBET embraces a detailed division ("taxonomy", per Bloom) of ELOs and associated proficiencies that are amenable to "scientific" analysis, which is useful as a quantitative vehicle for accountability. Second, Taylorism casts rank-and-file workers as doers, not thinkers, a category reserved only for managers who conceptualize the activities in which workers perform their duties. Analogously, learners in a CBET system are conceptualized as doers while the instructors are the thinkers who design and prescribe pre-determined behavioral outcomes, the ELOs.

<sup>15</sup>Both the references I cite comment on the high dropout rates, but do not provide data, and it has proven impossible to find such data. About half a century after this period, I surmise that the drive to ensure "accountability" was limited to analysis of student performance on ELOs, and, simply stated, the high dropout rates were known generally but not reported.

Although familiar Taylorist principles seem consistent with CBET principles developed in the context of the quantitative revolution as well as behaviorism and liberal approaches to diversity, the mix of ideas associated with CBET lacks coherence. For example, the granularity of Taylorist divisions of labor and their manifestation in CBET in terms of ELOs and proficiencies are inconsistent with the holism of systems theory. One conceivably might argue that the two frameworks nicely complement each other, but the underlying principles nonetheless differ. Whereas from a systems perspective a change in one component of a system affects all others, proficiencies and ELOs do not necessarily interrelate unless a specific proficiency directly speaks to such interrelation. The skills-based knowledges at which CBET aims lack a relational understanding of problems and construct compartmentalized logics that can miss problems formed at their nexus.

Another contradiction lies in the evolving discourse of personalization, which champions student-centered learning. Students indeed have control over the speed with which they complete tasks, but they have no voice regarding the domain of tasks to complete, or at the least an avenue of negotiation. The practices by which the personalization tenet of CBET materializes contradict the humanist values of scholars such as John Dewey (1971), from whom CBET also purportedly draws, partially. Dewey was interested in activity-based learning, suggestive of CBET's emphasis on skills-based education, and this interest connected with knowledge-based education. Ironically, CBET scholars tended to focus on the former and circumvented the latter (see Wexler, 2019), reinforcing the notion of the Taylorist division between doers and thinkers and rendering the lack of student control over knowledges problematic. Similarly, CBET scholars emphasized linguist Noam Chomsky's (1965) distinction between doing and knowing while bypassing Chomsky's thoughts about the importance of knowledges, a centerpiece of his critique of Skinner's devaluation of innate knowledges (Hodge, Mavin, & Kearns, 2020). Following the behaviorism of Skinner, CBET presumes that knowledges follow from skills. Yet evidence exists that affirms the opposite, namely that knowledges prefigure skill acquisition. For example, a study comparing the performance of two groups of children – one of which had developed contextual knowledges regarding a topic on which they were tested and the other of which had not – showed that the group with contextual knowledges tested better than the other group (Wexler, 2019, p. 30). Another study showed that children at resource-poor schools lack the texts available in affluent school districts that feature material on standardized exams (Broussard, 2018, p. 53). Context matters regarding both the knowledges that enable relational, critical analysis and the accounting of uneven performance.

The growth of CBET throughout the second half of the twentieth century and its diffusion around the world are ironic considering the problems. In addition to issues regarding the circumvention of contextual knowledges and the high dropout rates from CBET programs, proponents of the pedagogy were unable to provide evidence that it results in better performance than other pedagogies (Gallagher, 2014; Hodge & Harris, 2012; Kerka, 1998; Tuxworth, 1989). Moreover, despite the vocational orientation to provide an education-to-jobs pipeline, the CBET community stopped short of any communication with employers (Burnette, 2016, p. 90; Henrich, 2016). CBET was out of touch with new developments downstream in the workplaces for which it purportedly was preparing students. In contrast to the narrow focus on specific tasks in a Taylorist-inspired rigid division of labor connecting with CBET, post-Fordist production processes by the 1980s in the United States, especially in the automobile industry, mimicked Japanese competitive strategies regarding quality control, which required holistic, contextual knowledges through job rotation. Accordingly, the Japanese had to train US workers in their branch plants in the United States and located facilities in "greenfield" sites—rural areas without a history of manufacturing—to avoid having to make workers unlearn Taylorist practices (Ettlinger & Patton, 1996). The capacity of CBET students to tackle new, multidimensional problems in workplaces remained "a next step" (Hyland, 1997), and continues to be elusive in new and different ways in the digital era.

Although the theory of disruptive innovation predicted that pedagogy follows from new technology, and thereby missed the historicization of new trends, its departure from an emphasis on breakthrough innovations, through its focus on tweaking existing products or services and rendering them accessible to those formerly overlooked as a market (often due to lack of affordability), is apt. The expansive notion of disruption describes market changes regarding pedagogy relatively accurately, even if partially. Digital technology enabled the scaling of a pedagogy that emerged in the twentieth century for a small market, which represented, however, a downsizing of the original society-wide target population. It was the confluence of an existing pedagogy and new technologies to scale up its delivery, not a causal or chronological relation between the two, that constitutes the current disruption. Causal factors are contextual, not a matter of technology proactively being pushed on a market to engage profound societal problems. A fundamental problem with the theory of disruptive innovation applied to knowledge production is that at its core it is technocratic in its presumption that technology can engender a mode of knowing capable of serious engagement with societal needs.

History shows us that the present is produced over time, discontinuously. The discontinuous and contingent nature of CBET's evolution is reflected in changes in its target populations and in the disparate intellectual wellsprings that spawned various renditions of the pedagogy in different types of institutions. The "production of the present" is evident in the profusion of problems associated with CBET principles, the inconsistency among those principles, and the lack of follow-through to connect education with jobs—all of which were evident in the twentieth century and unsurprisingly remain so. It would have helped if proponents of the so-called new pedagogy in the new millennium had contextualized the principles they promulgate so as to learn from history. Importantly, beyond the problems that result in student attrition and the lack of connection between educational institutions and employers, the inattention to relational and contextual thinking in CBET raises important questions about ethics and responsibilities, as elaborated below.

#### **Downstream Consequences of Tech-Mediated CBET**

While edtech renders students valuable as "dividuals" to a variety of actors, notably firms, the accompanying pedagogy renders students valuable downstream as workers, also notably to firms. The corporate, neoliberal sense of value envelops and pervades all aspects of education in the twenty-first century. Related critical discussions of neoliberal education have focused on its privatization;16 the promotion of diversity in universities for the sake of competitive advantage; the training of students for lifelong learning so they can adapt to changing workplaces; and the cultivation of overwork (Cockayne, 2020; Mitchell, 2018). The CBET pedagogy, and more informally the skills orientation in technical and professional fields, have ushered in novel ways to inculcate neoliberal and technocratic values that play out downstream in workplaces and everyday life. The personalization component of CBET responsibilizes students for their progress while an ELO repertoire of skills licenses students for jobs, without, however, the contextual and conceptual knowledges that would permit critical questioning. Even if traditional universities and colleges have only recently begun to adopt the ELO system, many disciplines—notably technically oriented STEM fields and business and other professional fields, the fields in which CBET developed in the twentieth century in non-tenure educational institutions—have long approached education principally from a skills vantage point. Formalization of ELOs reinforces existing tendencies that materialize in new curricula, with consequences downstream.

Although jobs in the data sciences require considerable critical thinking regarding, for example, statistics and engineering, they have no requirements for knowledges of the places or people that applications affect. Contextual issues and related knowledges lie outside the data-science domain, which explains why AI researcher Hannah Kerner (2020) has argued that data scientists are "out of touch," in part due to the prioritization of novel methods and relative disrespect for research on applications to pressing real-world problems. Kerner pointed out that AI researchers compete based on contrived benchmarks that embed biases or pursue modelling with inappropriate categories that lack connection with complex dynamics in the real world. Media scholar Sophie Bishop's (2020) ethnography of algorithmic experts associated with YouTube industries showed that these practitioners routinely ignored issues such as socio-economic inequalities inherent in social media platforms. Human-computer interaction scholar Kenneth Holstein et al. (2019) found in an interview-based study of data-science practitioners that "fairness," apparently a proxy for "ethics" in data-science workplaces, is something one does on one's own time. A report drawing from data-science practitioners worldwide showed that only 15% of respondents indicated their organizations dealt with fairness issues (Anaconda, 2020, p. 32).

<sup>16</sup>Almost one-third of the world's population is now privately educated (Levy, 2018).

The lack of concern for the effects of applications of AI research derives from the reward system. The private sector, notably big tech, dominates as the major employer of AI researchers and funds most AI research (Knight, 2020). The main priority, therefore, is profit. As one data scientist commented, "I like to view myself as a problem solver, where data is my language, data science is my toolkit, and business results are my guiding force" (Peters, 2018). Similarly, as sociologist and critical media scholar David Beer (2019) showed in his interview-based study of the data analytics industry, data analysts strive for "… the pursuit of efficiency and the location of value" (p. 129). Consistent with the profit motive, a survey- and interview-based study of firms engaged in data analytics and AI across wide-ranging industries found that a salient motive for engaging "ethics" is self-promotion by establishing trustworthiness in the reputation economy to further business interests (Hirsch et al., 2020). The study found that ethics often are interpreted as a privacy issue, which certainly requires attention but hardly encompasses the wide-ranging effects of applications. None of the motives uncovered by researchers prioritizes the effects of decision-making on people and places outside a firm. The crystallization of the skills-focused CBET pedagogy upstream reinforces rather than alters the technocratic and neoliberal values that infuse data-science workplaces, a perilous prospect in the context of deepening socio-economic polarization and conflict worldwide.

Problems in the domain of the data sciences "leak" into other domains. Sociologists Will Orr and Jenny Davis (2020) found that agents of the data sciences offload ethical issues onto corporate *users*. As one of their AI-practitioner interviewees remarked,

We were a technology provider, so we didn't make those decisions… . It is the same as someone who builds guns for a living. You provide the gun to the guy who shoots it and kills someone in the army, but you just did your job and you made the tool. (cited in Orr & Davis, 2020, p. 12)

Lack of training in contextual and related knowledges among corporate users in turn clarifies why critical questioning among users of data-science products is rare. Further, Orr and Davis found that each of their 21 interviewees had limited awareness of the broader system in which they worked. Beyond the fundamental tie to profitability, a serious impediment to productive and ethical engagement with applications and their effects is the Taylorist division of labor, reflecting a mode of working and learning that is inculcated upstream and grounded downstream. The division of labor within firms, and more generally across the ecosystem of firms, renders everyone disengaged from the linkages among tasks fulfilled by different people and groups, despite the technocratic discourse of seamless flows. Orr and Davis's (2020) study revealed a pattern of "ethical dispersion" in which "… powerful bodies set the parameters, practitioners translate these parameters into tangible hardware and software, and then relinquish control to users and machines, which together foster myriad and unknowable outcomes" (p. 7). Beer (2019, p. 129) similarly found that "the data gaze" conceptualizes the world from the vantage point of isolated constituent parts from which the whole is retrofitted.

The brave new world of education portends a world ironically insensitive to issues of difference—the initial prompt for CBET developments in the 1960s—and unable to engage digital subjects upstream and downstream in problems of social and data injustice that affect us all. The direct effects on marginalized populations are clear, while the insulation of white privilege has obscured problems that are erupting in protests worldwide. A crucial lesson of the Covid-19 pandemic is a nasty paradox: The apparently "rich" United States has plenty of vaccines while so many other countries suffer, yet people travel internationally and carry the virus with them, and the deep but unattended inequalities within the United States have contributed to significant numbers of people refusing vaccines, with consequences, even if uneven, for everyone. Myopia towards longstanding societal wounds can be a matter of life and death, yet the science of the digital era has yet even to attempt to grapple with this pressing reality. As computer scientist Barbara Grosz commented in an interview regarding the ethical problems facing the data sciences, "… it's not a question of just what system we can build, but what system we should build. As technologists, we have a choice about that, even in a capitalist system that will buy anything that saves money" (cited in Ford, 2018, p. 349).

#### **Conclusion**

The datafication of knowledge in the twenty-first-century version of CBET, currently unfolding through the edtech industry, inculcates technocratic thinking that prepares students upstream in the neoliberal academy for work downstream that lacks critical, contextual thinking, and accordingly produces working subjects unlikely to question the parameters of work assignments. The relation between upstream learning and downstream practices is, however, one of conditioning, not determinism, because there always is the possibility that digital subjects will reflect critically on what they know, how they know it, the ways in which their knowledges have been constructed and governed, and how they might think and conceivably act differently (Foucault, 2000). Yet such deep and possibly difficult thinking can be a tall order when so many digital subjects are pressed for time, often in the context of multiple jobs, or otherwise concerned with the requirements of maintaining a job. Resistance to norms always exists, yet often in the shadows of a dominant regime.

Although education conditions knowledges, recognizing alternative scenarios, it is not unicausal. Traditional, tenure-track postsecondary colleges and universities in the late twentieth century, for example, did not implement CBET, suggesting other problems, such as the construction of postsecondary education by and for the relatively privileged—another factor at work in producing limited frames of reference with negative effects downstream as societal inequalities deepened following civil rights legislation. Lack of diversity coupled with CBET pedagogy in the new millennium helps explain how well-meaning and intelligent actors can lack critical awareness of the contexts their actions affect and of the relation between individual tasks and broad societal problems.

If education is to guide us to a better world, then the "new" pedagogy is cause for serious concern at a moment when the world is at a tipping point of tensions wrought of profound inequalities. Admittedly, conditions vary across space. For example, countries with a robust welfare state, where education through the postsecondary level is free and subsidized by government, lack the pressures indicated in this chapter for the continual boosting of revenue in educational institutions that fuels strategies prioritizing profitability. Yet the "welfare state" is an idealized model, and already, notably in western Europe, many nation-states increasingly lack the capacity to provide basic needs for all subjects, especially in the context of mushrooming streams of international migration among economic, political, and environmental refugees. Processes of disintegration of the welfare state are uneven across space relative to context-specific conditions, but they appear inexorable in light of deepening socio-economic polarization worldwide.

Some of the problems of the CBET pedagogy, notably ineffective engagement with issues of difference, are unsurprising, precisely considering the failure of CBET in the previous century in the United States to engage these issues. Upstream efforts to correct algorithmic violence to places and people often register in the insertion of a course in ethics in data-science curricula, commonly conceptualized in terms of philosophy. Yet ethics-as-philosophy does little to inform data scientists-in-training about contextual issues, the focus of critical social science. Ethics matter, but without contextual knowledges they remain an abstraction. Interdisciplinary curricula are pivotal to responsible downstream practices, with the qualification that they encompass more than skill sets delivered through ELOs, specifically critical, contextual, content-oriented knowledges that enable connection between intellectual constructs and lived experience. Indeed, one corner of education theory, apparently jettisoned in the pursuit of prescribed outcomes, is the theory of "situated learning" (Lave & Wenger, 1991), which interestingly was adopted in a corner of innovation theory centered on "communities of practice" (Wenger, 1998; Wenger, McDermott, & Snyder, 2002), and broadly has parallels in feminist theory regarding "situated knowledges" (Haraway, 1988). As feminist and critical data studies scholars Catherine D'Ignazio and Lauren Klein (2020) have argued, feminist principles that value situated knowledges as well as difference, multiple perspectives, and intersectionality are germane to a constructive data science.

Crucially, a critical, interdisciplinary understanding of data studies requires attention well beyond the data-science disciplines. All students across all fields, including the humanities, social sciences, arts, business, law, and health, should be exposed to the problematic and often devastating uneven realities of algorithmic life within the education sector and more broadly. Beyond revealing the fruits as well as the problems of societal projects, education should teach us all about our real or potential implicit complicity in the perpetuation of inequalities by virtue of lack of critique, silence, and unwitting collaboration on everyday violences. A proactive sense of citizenship committed to social, environmental, and data justice requires urgent attention in all domains of life, including the upstream production of knowledges and their downstream applications.



**Nancy Ettlinger** is a critical human geographer with interests in digital life, the unevenness of neoliberal and algorithmic governance relative to entrenched societal hierarchies, the politically charged nature of knowledge, social justice, and hopeful possibilities for constructive change emanating from civil society. Her current work engages the ways in which the digital infrastructure enables undemocratic events and processes and possibly political regime change. She is the author of *Algorithms and the Assault on Critical Thought: Digitalized Dilemmas of Automated Governance and Communitarian Practice* (Routledge, 2023), and of publications in journals such as *Big Data & Society*; *Foucault Studies*; *New Left Review*; *Antipode*; *Work Organisation, Labour & Globalisation*; *Political Geography*; *New Formations*; *Progress in Human Geography*; *Cambridge Journal of Regions, Economy & Society*; *Annals of the American Association of Geographers*; *Geoforum*; *Environment & Planning A: Economy and Space*; *International Journal of Urban & Regional Research*; *Journal of Economic Geography*; *Feminist Economics*; *Human Geography*; and *Alternatives: Global, Local, Political*.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Part II Spaces of Digital Entrepreneurship, Labor, and Civic Engagement**

## **Chapter 6 Europe's Scaleup Geography and the Role of Access to Talent**

**Zoltán Cséfalvay**

Centre for Next Technological Futures, Mathias Corvinus Collegium, Budapest, Hungary
e-mail: csefalvay.zoltan@mcc.hu

#### **The Entrepreneurial Ecosystem: From a Metaphor to an Analytical Approach**

Although startups play a vital role in innovation and have become one of the main drivers of the current industrial revolution, researchers know little about their geographical distribution. This, however, largely contradicts the fact that today's buzzword "entrepreneurial ecosystem" (Brown & Mason, 2017; Brown & Mawson, 2019)—a term conceived by Moore (1993) primarily as a metaphor rather than as a research and policy concept—aims to place the entrepreneur center stage (Acs, Stam, Audretsch, & O'Connor, 2017; Audretsch, Cunningham, Kuratko, Lehmann, & Menter, 2019). More precisely, those using this approach largely focus on analyzing and supporting the institutional environment that supports and enables the creation of new firms and businesses (De Meyer & Williamson, 2020; Isenberg, 2010; Spigel, 2017).

The first drawback is that the list of such institutional actors is exceedingly long, including, for example, universities, research institutes, technology centers, large multinationals, nonprofit organizations, incubators, accelerators, business organizations, banks, venture capital funds, angel investors, and governmental organizations. Yet skilled labor and talent are also a prerequisite, and cultural factors, including success stories and societal norms, may play a major role. From this broad spectrum, scholars have made numerous references to the pivotal role of universities (Heaton, Siegel, & Teece, 2019) and in particular to knowledge transfer and spillover through university spinoffs (Graham, 2014; Grimaldi, Kenney, Siegel, & Wright, 2011; Scott, 2002). The publicly funded and operated R&D and innovation agencies—such as the Defense Advanced Research Projects Agency (DARPA) in the US, the Finnish Funding Agency for Technology and Innovation (TEKES) in Finland, or the European Institute of Innovation and Technology (EIT) in Europe—form another widely studied area, not least because they are embedded in the idea of public-private partnership for innovation (Block, 2008; Mazzucato, 2013). Still, in both cases, policymakers face the same challenge: How can they foster new technological breakthroughs by promoting and subsidizing young companies and startups that are not yet even born? Kay (2011) puts it succinctly: "[I]f an industry is to advance, much—perhaps all—innovation will come from businesses that don't yet exist" (pp. 9–10).

A further and more serious drawback is that a fundamental contradiction arises when one applies the ecosystem metaphor to innovative technology regions with a vibrant entrepreneurial culture and a critical mass of startups. The word "ecosystem" is associated with stability, resilience, and organic development, whereas innovative entrepreneurs, and especially startups in the sense of Schumpeter's creative destruction, hew more closely to the disruptive character of innovation. Researchers have shown that startups are the companies from which one expects technological innovations and innovative products, whereas large corporations usually develop their products incrementally and continuously with systematic R&D work (Baumol, Litan, & Schramm, 2007; Christensen, 1997; Tirole, 2017). Yet these Schumpeterian dynamics and the ceaseless struggle between incumbent companies with old technologies and frontier firms with new technologies (Aghion, Antonin, & Bunel, 2021; Phelps, 2013), between breakthrough innovation through startups and incremental innovation driven by incumbents, hardly fit the ecosystem metaphor (De Meyer & Williamson, 2020; Fransman, 2018).

#### *The Battle of Narratives*

Although the term "ecosystem" in relation to entrepreneurs, startups, and innovation stretches back to the mid-2000s, this strongly policy-oriented application of the concept is deeply rooted in the regional sciences of previous decades. As early as the 1990s, Storper (1997) outlined the "Holy Trinity" of regional economic development as the three fundamental elements of technology, institutions, and the region, with their features jointly interacting to influence the economic development of a given region. A decade later, Etzkowitz (2008) introduced the "Triple Helix" as an analytical framework for innovation clusters, according to which innovation can be found where the activities of the three principal actors (the university sphere, industry, and the state) intersect, or where their border areas mutually overlap and hybrid organizations are created (Ranga & Etzkowitz, 2013).

Today's policy thinking on promoting innovation revolves around the concept of *open innovation*, whose advocates emphasize the cocreation of knowledge in multiplayer networks (Andersen, de Silva, & Levy, 2013; Hutton, 2015). This is the result of a decades-long journey in four distinguishable phases, clearly recognizable in the frequencies with which the related terms are mentioned in the literature, taken from Google Books Ngram Viewer, which records a large corpus of books (see Fig. 6.1).

**Fig. 6.1** Frequencies of terms for the different policies promoting innovation in printed sources, 1960–2019 (frequencies in %). Source: Data retrieved December 29, 2021, from Google Books Ngram Viewer (See https://books.google.com/ngrams/). Design by author
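As a methodological aside, term frequencies of this kind can be retrieved programmatically. The sketch below is a minimal illustration, assuming the unofficial JSON endpoint behind Google Books Ngram Viewer; the endpoint and its parameters are undocumented and may change, and the figures in this chapter may well have been produced by other means, such as the viewer's own export.

```python
# Minimal sketch: retrieve yearly term frequencies from the unofficial
# JSON endpoint behind Google Books Ngram Viewer. The endpoint is
# undocumented and may change without notice.
import requests


def ngram_frequencies(terms, year_start=1960, year_end=2019):
    """Return {term: list of yearly relative frequencies}."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": ",".join(terms),
            "year_start": year_start,
            "year_end": year_end,
            "corpus": "en-2019",  # assumption: current English corpus id
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return {s["ngram"]: s["timeseries"] for s in resp.json()}


if __name__ == "__main__":
    freqs = ngram_frequencies(
        ["science policy", "technology policy", "innovation policy", "open innovation"]
    )
    for term, series in freqs.items():
        print(term, f"peak frequency: {max(series):.3e}")
```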

The 1950s and the 1960s were the decades of *science policy*, characterized by the hope that the results of state-financed basic research would spread to the economy almost automatically. In the 1970s and the 1980s, *technology policy* received more attention, and the state no longer heavily subsidized research projects, but rather certain technologies in a broader sense. Then came the decades of *innovation policy*, in the 1990s and 2000s, with the catchphrase "knowledge transfer," and thus the basic problem arose of finding institutions and organizations that could facilitate the flow of technological knowledge towards economic actors.

To speak of the new stream of open innovation since the early 2010s is to acknowledge the simple fact that no organization developing a new product or technology today can claim to know everything and to have no need for the knowledge of others. In the stricter, corporate-management sense in which Chesbrough (2003) first used the concept, companies open up their R&D boundaries to their environment. In a broader context, new products and technologies today are developed in a dense network of interactions, sharing, cocreation, and cofunding among numerous actors, including of course the state, the business sphere, the universities, small startups and large corporates, the financial sector, and the supporting institutions, whereas the Internet and unlimited global and mobile accessibility enable almost everybody to join the network of open innovation.

As for other theoretical roots, researchers of *industrial clusters* have found a direct route into the concept of the entrepreneurial ecosystem (Spigel & Harrison, 2018). This goes back to the 1990s and Porter's seminal paper (1998), in which he underscored the prominent role of the geographical concentration of interconnected companies and firms in related industries and associated organizations (i.e., universities and research institutes, financial intermediaries, and business-related services). At the same time, case studies proliferated on technology-based industrial clusters, such as Silicon Valley and Boston's Route 128 (Kenney, 2000; Lee, Miller, Hancock, & Rowen, 2000; Saxenian, 1996), and on technopoles and science cities (Castells & Hall, 1994), such as the Research Triangle in North Carolina, Tsukuba and Kumamoto in Japan, the "Silicon Fen" around Cambridge in the UK (Koepp, 2002), and Sophia Antipolis in France.

Further roots lie in the concept of the *industrial district*, which reemerged in the 1990s. The reasoning behind this was that small companies—if they embed themselves in regional networks while cooperating and competing with each other—can be very innovative and make the entire region more competitive (Pyke & Sengenberger, 1992). A prime example is the so-called "Third Italy," the north-east and central parts of the country, in which numerous industrial districts have developed through locally embedded collaborations of small and medium-sized companies mainly specializing in craft-based manufacturing (Pyke, Becattini, & Sengenberger, 1990). Nevertheless, here again success is rooted in many factors, such as localized knowledge production and spillover through interaction between the firms (Maskell & Malmberg, 1999), their networks (Camagni, 1991), and their flexible specialization (Piore & Sabel, 1984). Undoubtedly, all these ideas are indebted to a large extent to Marshall (1919) and his theory of industrial districts.

Looking back over the past few decades, the narratives surrounding regionally anchored cooperation and competition among the different actors promoting new technologies, innovation, and new companies have changed radically. A simple glance at the frequencies with which the related terms are mentioned in the literature—again taken from Google Books Ngram Viewer—shows that whereas the ecosystem concept has won the battle of narratives, the theory of industrial districts and clusters, which is more closely linked to a seemingly outdated industrial policy, has gradually lost its relevance (see Fig. 6.2).

**Fig. 6.2** Frequencies of terms for the different concepts for regionally embedded innovation in printed sources, 1990–2019 (frequencies in %). Source: Data retrieved December 29, 2021, from Google Books Ngram Viewer (See https://books.google.com/ngrams/). Design by author

There are many reasons for this shift, but most of them relate to changes in technology. On the one hand, today's almost ubiquitous digital technology makes the regionally embedded interconnectedness of different actors faster and cheaper than ever. On the other hand, the same digital technology allows companies, especially startups, to scale and grow faster and more economically than ever before. In short, policies that encourage innovation and the entrepreneurial ecosystem require relatively little investment but promise high returns. Ample evidence exists that policymakers virtually everywhere in Europe are aiming to create their own Silicon Valley, yet "[t]aking on a name, and perhaps establishing some business incubators or building a few semiconductor firms, PC factories, or software houses, is not enough" (Lee et al., 2000, p. 3). Even if an extensive literature on ecosystems for innovation, entrepreneurs, and startups supports these efforts, in practice it is hardly possible to implement these concepts without market forces, flesh-and-blood startup founders, and venture capitalists (Lerner, 2009).

#### *Why the Scaleups?*

In this study, I focus on startups in terms of innovation and the entrepreneurial ecosystem, for multiple reasons. In a comparison between startups and big corporates, Tirole (2017, p. 443) rightly pointed out that today "innovation happens more and more in small entrepreneurial startups rather than in large companies." Corporate management is interested in safeguarding the market for its existing products, so why should it support the intra-company development of new products that would eventually eat up the market opportunities of the previous ones? Startups also have the upper hand against corporations in innovations that require mainly intellectual capital and relatively little capital investment (in capital-intensive areas, corporations will always hold the trump card). They also have advantages in areas with strong competition for users and consumers, where the market is not covered by a few large enterprises. They often win in fields where innovation does not require deep scientific knowledge or expensively equipped laboratories; as Phelps (2013) underscores, innovation is not the preserve of the elite—most of the time, innovation is not rocket science or high-tech.

However, digital technologies increase startups' chances enormously. Once again, a quick glimpse at the frequencies with which the terms "digital technologies" and "startups" are mentioned in the literature reveals that they have been going hand in hand and that their effects are mutual (see Fig. 6.3).

Digital technologies lower the barriers to market entry and thus open up more opportunities for startups; vice versa, those startups drive the development of digital technologies. In addition, digital technologies extraordinarily facilitate the combination of different business and technology fields, which is the very essence of innovation and thus of great potential benefit for startups.

**Fig. 6.3** Frequencies of the terms digital technologies, startups, and open innovation in printed sources, 1970–2019 (frequencies in %). Source: Data retrieved January 31, 2022, from Google Books Ngram Viewer (See https://books.google.com/ngrams/). Design by author

Back in the middle of the last century, Schumpeter not only glorified the entrepreneur as the engine of development, creating new products, new methods of production, or new forms of industrial organization (Schumpeter, 1942/2003, p. 82), but was also aware of what *new* means in most cases, as "innovation combines factors in a new way, or that it consists in carrying out new combinations" (Schumpeter, 1939/1989, p. 62). Today Ridley (2020, p. 250) formulates this insight more generally, emphasizing that every *innovation is recombinant*: "every technology is a combination of other technologies, every idea is a combination of other ideas," and it is digital technology that makes these combinations easier, faster, and cheaper. As Brynjolfsson and McAfee (2014, p. 78) recognize, "the true work of innovation is not coming up with something big and new, but instead recombining things that already exist." Listing many well-known examples, from Google's self-driving car to Facebook and Instagram, they conclude that "digital innovation is recombinant in its purest form" (Brynjolfsson & McAfee, 2014, p. 81).

Yet startups are not only benefiting from this shift towards recombinant and open innovation—they are also taking advantage of the increasing role of intangibles in the modern economy, from software to intellectual property rights, from brand value to large databases. Haskel and Westlake (2017, 2022) underline that in an age when investment in intangible assets becomes increasingly important, a crucial property of intangibles, synergy, has a critical impact on innovation. Because ideas go well together with other ideas, especially in technology, intangibles are often particularly valuable when properly combined with other intangibles. This is precisely what paves the way for startups, which are typically involved in the innovation process when knowledge and human capital are the assets to be leveraged. Another advantage of intangibles, especially in relation to digitized assets or platforms with network externalities, is that companies relying heavily on them can grow exponentially and scale globally at unprecedented speed (Azhar, 2021). All of this combined is generating a winner-take-all frenzy, the rise of superstar firms (Aghion et al., 2021; Autor, Dorn, Katz, Patterson, & van Reenen, 2020), and a growing gap between the front-runners and the laggards, as the latter are usually engaged in the tangible economy.

The advantages startups hold in digital technologies and innovation are obvious. In this study, I apply Graham's (2012) approach and use the term "startup" in line with my main information source, Dealroom, whose authors define a startup as "a company designed to grow fast" (Wijngaarde, 2021). This allows one to avoid arbitrary thresholds for various metrics such as the age, technology, funding structure, market value, or employment structure of firms. For deeper research, however, the problem arises that the distribution of startups is skewed and roughly follows a power law when one examines the relationship between the amount of funding and the number of startups: The vast majority of startups receive very little funding or none at all (Cséfalvay, 2021). Similarly, only a tiny fraction of startups is responsible for the bulk of innovations and technological breakthroughs, whereas most are caught in the early stages of launching a new business with a marketable product.

For this reason, analyzing the growth stages of young companies, Flamholtz and Randle (2015) distinguish the "organizational scaleups": those startups that have already received significant funding, developed a marketable product and viable business model, and are therefore able to grow quickly. For a startup to qualify as a scaleup, the various startup ecosystem ranking institutions (Dealroom & Sifted, 2021; Durban, 2021; Erasmus Centre for Entrepreneurship, 2021) set numerous criteria to be met, such as annual growth, number of employees, or annual turnover. Yet what they have in common is that scaleups are those startups that have already raised at least US\$1 million in funding.

Whereas Ries (2011, p. 27) famously defined a startup as "a structure designed to create a new product or service under conditions of extreme uncertainty," scaleups have already passed the stage of extreme uncertainty. To quote another often-cited definition, from Blank and Dorf (2020, p. xvii), who describe a startup as "a temporary organization designed to search for a repeatable and scalable business model": scaleups have already found their business model and have marketable products. In short, scaleups are successful startups that are economically relevant and have growth prospects, and as such they can make a significant contribution to the entrepreneurial ecosystem of a city or a region.

#### *Why the Cities?*

It is evident that policies targeting the ecosystem for innovation, entrepreneurship, and startups increasingly include place-based measures. The crucial question, however, is what kind of places are best suited today for establishing such an ecosystem and, in particular, how to stimulate its dynamics (Bailey, Pitelis, & Tomlinson, 2018).

In this context, Florida (2017) stressed that the recent "urban shift of the high-tech startup companies and talent is a real sea change" (p. 42). On the one hand, it was a long-awaited phenomenon; on the other hand, it contradicts the period from the 1970s to the turn of the millennium, when high-tech industries, venture capital investment, and startups moved to suburban edges such as Silicon Valley or Boston's Route 128. Now, however, apart from a few previously established corporate campuses of today's digital giants, the startups are leaving—as Kotkin (2000) puts it—the "Nerdistan," the sprawling, car-oriented suburban periphery with office parks, for the vibrant and dense cities with a creative milieu. Whereas the venture capital investment and venture capital-backed startups of the 1980s and 1990s clustered around the fringes of suburban areas, today it is the city that is becoming a booming "startup machine" (Florida, Adler, King, & Mellander, 2020).

Cities have always been the centers of knowledge production and transfer, so they offer an almost natural fit for startups. What is new is their comeback, and the drivers behind it are again increasingly technological. Since the beginning of the last decade, society has been experiencing the Fourth Industrial Revolution: with new technologies such as artificial intelligence, big data analytics, blockchain, biotechnology, and nanotechnology (Schwab, 2016); with new means of production such as digitization, robotization, and automation; and with the overarching economic shift from tangible to intangible assets and investments. One common denominator of these technologies is that they are geared less towards hardware and more towards software and intangibles, and thus do not necessarily require large office spaces or manufacturing capacities, the easy and cheap availability of which once fueled the rise of the suburban periphery. Consequently, startups are now moving from the suburbs to the cities to benefit from the dense network and cluster of universities, research institutes, venture capital funds, high-tech services, and the creative milieu. As Florida and Mellander (2016) summarize this shift: "[T]he suburban model might have been a historical aberration, and innovation, creativity, and entrepreneurship are realigning in the same urban centers that traditionally fostered them" (p. 14).

#### **Research Questions, Data, and Methodology**

Against the backdrop of this brief overview of startups, scaleups, and the entrepreneurial ecosystem, let me lay out the two objectives of my study.

The first is to analyze the European scaleup landscape in terms of cities' performance and to look in detail at the territorial distribution of scaleups across European cities. Examining the well-known startup ecosystem rankings (Dealroom & Sifted, 2021; Erasmus Centre for Entrepreneurship, 2021; Startup Genome, 2021; StartupBlink, 2021), one can conclude that a few large cities dominate the landscape. However, my aim is to include every European city with considerable scaleup performance in order to provide deeper insight into the geographic pattern.

The second objective is to investigate how access to locally available talent affects this landscape. Does it reinforce the trend towards concentration, or does it even weaken this tendency? Do cities with good access to talent have a chance to compete with the big scaleup cities? Or, conversely, does poor access to talent pose an obstacle for scaleup cities seeking to strengthen their position in the European scaleup city landscape?

My main source of information for answering these questions is *Dealroom.co*, a leading global platform for intelligence on startups, whose authors provide comprehensive data on venture-backed startups in every country in the world, with a detailed breakdown by location, industry, technology, funding, founders, investors, and market value. As I am focusing on scaleups, which I here define as startups that have raised more than €1 million in funding, my team members and I retrieved a total of 13,851 scaleups headquartered in Europe from the Dealroom database. As for the territorial distribution of scaleups, we applied the EU-OECD classification of *Functional Urban Areas* (FUAs) (Dijkstra, Poelman, & Veneri, 2019; OECD, 2021). A FUA consists of a city (core) and its commuter zone and thus encompasses the economic and functional extent of the city, with the great advantage that corresponding economic data, such as population and GDP, are available.

To analyze the European scaleup landscape, we matched the 13,851 scaleups retrieved from the Dealroom database with their respective FUAs using the Tableau software. We then applied three variables to measure cities' performance at the FUA level in terms of scaleups: the number of scaleups, the total funding of scaleups, and the number of scaleups with a market value of more than €200 million. Based on this, we performed a cluster analysis to filter FUAs with considerable performance in terms of scaleups; in particular, we applied the k-means algorithm, which resulted in a total of 166 FUAs (comprising 12,472 scaleups) arranged in six clusters (see Table 6.1).
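A minimal sketch of this clustering step follows, assuming a prepared table with one row per FUA and the three performance variables. The file name, column names, standardization step, and use of scikit-learn are illustrative assumptions, not the exact pipeline behind Table 6.1.

```python
# Minimal sketch: cluster FUAs into six groups by the three scaleup
# performance variables using k-means. File and column names and the
# standardization step are illustrative assumptions.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

fuas = pd.read_csv("fua_scaleups.csv")  # hypothetical file, one row per FUA

features = ["n_scaleups", "total_funding_meur", "n_over_200m"]
X = StandardScaler().fit_transform(fuas[features])  # put variables on one scale

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
fuas["cluster"] = kmeans.fit_predict(X)

# Cluster profiles in the spirit of Table 6.1
print(fuas.groupby("cluster")[features].mean())
```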

Whereas Global scaleup cities excel in every way and play an important role not only at the European but also at the global level, Top European scaleup cities perform less well in terms of the scaleups' numbers and market values and occupy a leading position only in Europe. Top European Emerging and Emerging scaleup cities feature relatively strong funding but lag far behind in growth, measured by the number of scaleups with a market value of more than €200 million. In contrast, Regional and Local scaleup cities perform very weakly in all aspects.

**Table 6.1** Descriptive statistics of the scaleup city clusters in Europe, 2021

*Note.* Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

#### **Towards Europe's Scaleup Geography**

#### *Skewed Distribution of Scaleup Cities in the European Scaleup City Landscape*

The skewed distribution of startups and scaleups, with a few companies concentrating most of the funding and the vast majority receiving very little, is also reflected in the landscape of European scaleup cities. Of the 166 scaleup cities, only a handful—the Global and Top European scaleup cities, 15 in total—concentrate 61% of the European scaleups, 71% of their funding, and 68% of the scaleups with a market value of more than €200 million (see Table 6.2).

Nevertheless, a scaleup city's development is a lengthy and complex process influenced by a number of crucial factors. When a city begins to concentrate startups and scaleups, and an ecosystem with universities, risk capital, entrepreneurial expertise, and supportive institutions evolves, the first challenge is to maintain them to make the development self-sustaining. Yet regional science researchers—particularly those studying industrial clusters and districts (Castells, 2000; Saxenian, 1996) and, more recently, innovation and entrepreneurial ecosystems (Engel, 2014)—have long proven that once a critical mass of these factors is reached, the ecosystem evolves into a self-reinforcing system that is able to attract startups, scaleups, investments, and talents, first from a larger region and later from around the world.

**Table 6.2** The distribution of scaleup city clusters according to their main performance variables

*Note.* Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

Therefore, competition between scaleup cities is, on the one hand, about growing to the point where the ecosystem becomes self-sustaining; on the other hand, beyond this point, it is also about attracting key resources globally, primarily talent and capital. Looking at the figures on the distribution of scaleup cities in terms of performance indicators (see again Table 6.2), the ecosystem's development is in its initial stages in the vast majority of European scaleup cities. Most of these are trying to develop a self-sustaining ecosystem, whereas only a few scaleup cities have reached the point where development becomes self-reinforcing and increasingly attracts global resources.

#### *West-East and North-South Gaps*

Although the distribution of scaleup cities by performance indicators conforms to the widely held claim that ecosystems are concentrated in a few hubs that hold the overwhelming majority of scaleups and funding, a detailed analysis paints a different picture—one with strong territorial gaps. Europe is marked by deep West-East and North-South divides, and even large metropolitan areas in Central and Eastern Europe and in Southern Europe lag far behind when it comes to the number of scaleups and the funding they have raised.

In terms of the number of scaleups, the landscape is dominated by the large Western European capitals, which also fall into the cluster of Global and Top European scaleup cities (see Fig. 6.4). With almost 5000 scaleups combined, the Global scaleup cities—London, Paris, Berlin, and Stockholm—concentrate 40% of scaleups in Europe. Top European scaleup cities—for example, Barcelona, Copenhagen, Dublin, Helsinki, Madrid, Amsterdam, Munich, Cambridge, Manchester, Oxford, and Zurich—host over 2500 scaleups, forming a further 20%.

In striking contrast, the 15 scaleup cities of Central and Eastern Europe—for example, Prague, Budapest, Tallinn, Vilnius, Gdansk, Poznan, Wroclaw, Cracow, Warsaw, Bucharest, Bratislava, Ljubljana, Riga, Sofia, and Zagreb—host a total of only 443 scaleups, equal to 3.5% of all European scaleups. For comparison, this lies above the corresponding value for Dublin (339 scaleups) but below that of Stockholm (489). Similarly, the capitals of Southern Europe—Rome, Athens, and Lisbon—together have fewer than 130 scaleups, in line with the values of Lausanne or Edinburgh.

**Fig. 6.4** The number of scaleups across the European scaleup cities, 2021. Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

In terms of total scaleup funding, however, the West-East and North-South divide is more pronounced (see Fig. 6.5). In the southern part of Europe, scaleups receive a relatively high level of total funding only in Barcelona (€5 billion), Madrid (€3.1 billion), and Milan (€2 billion), whereas the funding raised by the scaleups of Rome, Athens, and Lisbon jointly amounts to less than €1 billion (equal to the values of Toulouse or Malmo). Nevertheless, these numbers fall orders of magnitude below those of London (€57 billion), Paris (€22 billion), Berlin (€20 billion), and Stockholm (€13 billion), and also below those of Amsterdam (€7 billion) or Munich (€7 billion). As for the West-East divide, the scaleups of Central and Eastern Europe have notable total funding only in Bucharest (€1.9 billion) and Tallinn (€1 billion), whereas they received less than €500 million in major capitals such as Warsaw, Prague, and Budapest, and less than €100 million in Riga, Bratislava, and Ljubljana. The combined total funding of scaleups in the 15 cities of Central and Eastern Europe comes to just about €6 billion, a mere 2.5% of all funding of European scaleups. For comparative purposes once more: This is equivalent to the funding of startups in Dublin alone.

**Fig. 6.5** The total funding of scaleups across the European scaleup cities, 2021 (in million €). Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

#### *Trends for Convergence Only in Western and Northern Europe*

Despite the near-oligopoly of a very few scaleup cities, taking the size of the economy into account somewhat balances the picture (see Fig. 6.6). In terms of *funding density*—measured as the total funding of scaleups (million €) per US\$1 billion GDP—Global scaleup cities take the lead: Berlin (€85.9 million), Stockholm (€81.4 million), and London (€67.3 million), whereas Paris (€24.2 million) seems to be an exception. At the top of Europe, however, stand towns with world-class universities, for example, Cambridge, with €268.1 million in funding per US\$1 billion GDP, and Oxford, with a corresponding value of 190.5. In addition, there are very high funding densities in other university towns, such as Lausanne (115.7), Basel (55.3), Grenoble (37.1), Malmo (31.3), Geneva (32.9), and Leiden (22.1). Capitals in the Baltic region also excel when it comes to funding of scaleups relative to the size of the municipal economy, as in Tallinn (53.9), Helsinki (50.3), and Vilnius (26.4).

**Fig. 6.6** Funding density across the European scaleup cities, 2021 (scaleups' total funding (million €) per US\$1 billion GDP). Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

Yet Southern Europe's scaleup cities—with the exception of Barcelona (€22.9 million in funding per US\$1 billion GDP)—have low funding densities: see Madrid (9.1), Milan (6.9), Athens (4.0), Lisbon (1.8), and Rome (0.9). Similarly, Central and Eastern Europe has only one capital with a noteworthy funding density, Bucharest (34.1), whereas in other major cities of the region, such as Prague (3.8), Warsaw (2.4), and Budapest (2.9), scaleups receive significantly less funding than one would expect given the size of their economies.

In short, convergence marks the funding of scaleups relative to cities' economic power in Western and Northern Europe; in smaller university towns in particular, scaleups receive more funding than one would expect given the size of their economies. This trend can hardly be observed, however, in the scaleup cities of Central and Eastern Europe and Southern Europe.

#### **Access to Talent in the Scaleup Cities of Europe**

#### *Locally Available Talent as a Driving Force Behind the Performance of Scaleup Cities*

Turning to my second research question—how access to locally available talent affects the scaleup city landscape of Europe—I analyzed three variables: the number of startup founders who attended a university in the city; the number of startups created by founders who attended a university in the city; and the number of founders who attended a university in the city and raised more than €10 million in funding. To investigate the overall relationship between the performance indicators and access to talent, I applied a linear regression model (y = mx + b) across the entire sample of 166 scaleup cities, drew the regression trend line, applied a 1-percent significance level (α = 0.01), and computed the coefficient of determination (R²). With respect to the number of scaleups in the scaleup cities, all variables of access to talent fit extremely well (R² values from 0.82 to 0.86). Similarly, there is a strong, though somewhat weaker, relationship between the total funding of scaleups and access to talent (R² values from 0.66 to 0.73), and the performance variable of the number of scaleups valued at more than €200 million correlates with the access-to-talent variables at the same level (R² values from 0.67 to 0.73).
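The following sketch illustrates this regression step under the same assumptions as before: a hypothetical table of the 166 scaleup cities with illustrative column names for the performance and access-to-talent variables.

```python
# Minimal sketch: fit y = mx + b for each pairing of a performance
# indicator with an access-to-talent variable, reporting R² and whether
# the fit is significant at the 1% level. Names are illustrative.
import pandas as pd
from scipy.stats import linregress

cities = pd.read_csv("scaleup_cities.csv")  # hypothetical file, n = 166 FUAs

performance = ["n_scaleups", "total_funding_meur", "n_over_200m"]
talent = ["n_founders", "n_startups_by_founders", "n_founders_over_10m"]

for y in performance:
    for x in talent:
        fit = linregress(cities[x], cities[y])
        print(f"{y} ~ {x}: R2 = {fit.rvalue ** 2:.2f}, "
              f"significant at 1%: {fit.pvalue < 0.01}")
```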

#### *Decoupling the Eastern and Southern Parts of the Continent*

Given these strong correlations, it is unsurprising that the territorial landscape is again marked by the decoupling of the Eastern and Southern parts of the continent, and that students create successful startups in the large Western and Northern European cities, such as Paris with more than 2150 founders and London with 1850, followed by Amsterdam, Berlin, Barcelona, and Stockholm with founder numbers ranging between 500 and 700 (see Fig. 6.7). Students in Madrid, Munich, Dublin, Copenhagen, Milan, Utrecht, Helsinki, Rotterdam, Zurich, and Vienna are also active in creating startups, with the numbers of founders who attended these cities' universities falling between 240 and 430. Traditional university towns have remarkably high values: Cambridge and Oxford each have close to 500 founders, Malmo almost 200, and Leuven just under 100. In the capitals of Central and Eastern Europe, however, such as Prague, Budapest, and Bucharest, the numbers of startup founders who attended a university in these cities are very low, between 65 and 75, with the exception of Warsaw, which has almost 200.

**Fig. 6.7** Number of founders who attended a university in the scaleup city, Europe, 2021. Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

This landscape becomes more diverse if one examines the *founder density*, measured as the number of founders who attended the universities of a scaleup city per 100,000 inhabitants (see Fig. 6.8). On the one hand, Global scaleup cities such as Stockholm (23.3 founders per 100,000 inhabitants), Paris (16.7), London (14.8), and Berlin (13.2), as well as top European scaleup cities in the southern part of the continent, such as Barcelona (13.5) and Madrid (6.3), hold rather modest values, whereas the Scandinavian, Baltic, Dutch, and German scaleup cities have higher numbers of founders than expected based on their population size. In addition, some top European scaleup cities are even ahead of the Global scaleup cities in this regard, as is the case for Amsterdam (25.6), Dublin (19.2), Helsinki (18.7), Copenhagen (18.5), and Zurich (18.1).

**Fig. 6.8** Founder density of university students across the scaleup cities in Europe, 2021 (number of founders who attended a university in the scaleup city per 100,000 inhabitants of the city). Source: Author's own calculation based on Dealroom data retrieved June 2, 2021, from https://dealroom.co and EU-OECD FUA classification. Design by author

The university towns once again lead Europe, with Cambridge sporting the highest founder density (133.2 founders per 100,000 inhabitants), followed by Oxford (89.4), Maastricht (63.7), and Lausanne (49.8). Even smaller university towns have a relatively high density, as in Leuven (40.1), Aarhus (35.3), Malmo (27.6), Grenoble (24.5), and Leiden (24.3). In contrast, capitals in Central and Eastern Europe have some of the lowest values: Warsaw has 6.0 founders per 100,000 inhabitants, Prague 3.3, Bucharest 3.1, and Budapest 2.1.

#### *Convergence in Scaling Opportunities in Western and Northern Europe*

The West-East divide also appears in terms of scaling opportunities, indicated by the number of founders who attended a university in the city and raised more than €10 million in funding (see Fig. 6.9). Paris and London are by far the largest places in Europe for students to scale their startups, with corresponding figures ranging from 550 to 750. Cambridge, Oxford, and Stockholm offer good opportunities for scaling, with the number of founders who studied at these cities' universities and received more than €10 million in funding falling between 180 and 200. Munich, Dublin, Barcelona, and Copenhagen are also popular places for growing scaleups, with values between 110 and 130.

**Fig. 6.9** Number of founders who attended a university in the scaleup city and raised more than €10 million in funding, Europe, 2021. Source: Author's own calculation based on Dealroom data retrieved June 2, 2021 from https://dealroom.co and EU-OECD FUA classification. Design by author

Yet by examining the *scaling rate* of student founders' scaleups—measured as the number of founders who attended a university in the scaleup city and raised more than €10 million in funding relative to the total number of founders who attended a university in the given city—one can see that big cities do not hold a monopoly on pools of university students with entrepreneurial spirit (see Fig. 6.10). On average in Europe, around one in four founders who attended a university in the city raised more than €10 million in funding (26.8%). Almost every second founder in Lausanne and Cambridge, and every third founder in Oxford, Zurich, Dublin, Paris, Stockholm, Munich, Copenhagen, and London, has been able to grow and scale, raising more than €10 million in funding. Yet it is striking that some large scaleup cities—such as Berlin, Amsterdam, Rotterdam, Utrecht, and The Hague—despite the huge number of founders who attended a university in these cities, offer very weak and below-average opportunities to scale and grow, as only around one in ten of these founders received more than €10 million in funding.

Not only are there significantly fewer scaleup founder students in the capitals of Central and Eastern Europe, but those who are active also struggle with scaling, as only about one in five founders raised more than €10 million in funding. The scaling rate is 24% in Bucharest, 20% in Tallinn, 19% in Prague, 17% in Budapest and Bratislava, 15% in Warsaw and Riga, 10% in Vilnius, and 8% in Ljubljana. In other words, when startups founded by university students turn into scaleups in this region, they usually lack the capital and market to grow.

**Fig. 6.10** The scaling rate of the scaleups of student founders in selected European scaleup cities, 2021 (cities with more than 200 founders who attended a university in the scaleup city; scaling rate (%) = the number of founders who attended a university in the scaleup city and received more than €10 million in funding in relation to the total number of founders who attended a university in the scaleup city). Source: Author's own calculation based on Dealroom data retrieved June 2, 2021 from https://dealroom.co and EU-OECD FUA classification. Design by author
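The scaling rate defined above is likewise a simple share. The following is a minimal sketch, again with placeholder figures rather than the underlying Dealroom data:

```python
# Scaling rate (%) as defined for Fig. 6.10: founders who attended a
# university in the scaleup city and raised more than EUR 10 million,
# relative to all founders who attended a university in that city.
# The input values below are illustrative placeholders, not Dealroom data.

def scaling_rate(funded_founders: int, total_founders: int) -> float:
    """Return the scaling rate as a percentage."""
    if total_founders == 0:
        raise ValueError("No founders recorded for this city")
    return funded_founders / total_founders * 100

# Hypothetical city where 180 of 600 student founders scaled past EUR 10M:
print(f"{scaling_rate(180, 600):.1f}%")  # 30.0%
```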

#### **Concluding Remarks**

In this study, I have reinforced the widely held claim that startup ecosystems are concentrated in a few hubs and that, in Europe, only a handful of scaleup cities hold the vast majority of scaleups and funding. However, with detailed analysis I have also revealed deep West-East and North-South divides, with major metropolitan areas in Central and Eastern Europe and Southern Europe lagging far behind in both the number of scaleups and the funding these scaleups have raised. Signs of convergence appear only in Western and Northern Europe, and university cities in particular perform remarkably well with respect to the number and funding of scaleups relative to their population and economic size. This is partly due to the good access to locally available talent that universities can provide, whereas in the scaleup cities of the lagging Central and Eastern European and Southern European regions, students' weak engagement in entrepreneurship hampers the ecosystem's development.

Moreover, I have shown that the European scaleup city landscape is shaped by some strict rules. Firstly, *size matters*. Large European cities not only host huge numbers of scaleups but also provide many funding and scaling opportunities. Researchers have long established that big cities offer better conditions for entrepreneurial ecosystems due to economic agglomeration effects triggered by larger populations and greater densities. Yet not every large European city can benefit from this. The regional concentration of top universities, startups and scaleups, venture capital, entrepreneurial know-how, and supporting institutions tends to develop first a self-sustaining and then a self-reinforcing system which, after reaching a critical mass, is able to attract investment and talent from all over the world. In Europe, the startup ecosystems in most Global and top European scaleup cities have reached this critical mass and now appear to be evolving on their own, yet only a few have turned into self-reinforcing systems.

Secondly, *location matters* as well. Size is not the only factor, as long-lasting West-East and North-South development disparities also prevail in the European scaleup city landscape, especially when one compares the performance of scaleup cities with their population and economic size. In addition, the large cities of Southern and Central and Eastern Europe not only feature significantly fewer scaleups than the Western and Northern parts of the continent, but scaleups in these regions also struggle to access finance and to handle scaling and growth. In short, although the concentration of entrepreneurial ecosystems with strong scaleup performance is the dominant trend, it is one deeply embedded in Europe's economic and territorial disparities.

Thirdly, *knowledge matters* too. Being a large city is not a prerequisite for a high number of well-funded scaleups or for good opportunities for scaling and growth, since many smaller towns in Western and Northern Europe can offer an adequate ecosystem. Towns with world-class universities, in particular, are becoming serious competitors of the big players in the European scaleup city landscape. Although there are undoubtedly many factors influencing the performance of scaleup cities, I have shown that one determining factor is the upstream stemming from the university students in the cities in question. As creating startups is almost a uniquely university cultural "genre," it comes as no surprise that university towns also have the highest values in every respect, be it the number of founders, the amount of funding they raised, or densities relative to population. In contrast, the startup activities of university students in some large cities are rather modest, and the East-West and North-South divide still predominates in this area.

In short, scaleup cities in Southern and Central and Eastern Europe largely lack the upstream of university students, which is partly why their scaleup performance lags far behind. University cities, especially in Western and Northern Europe, on the other hand, show very good scaleup performance due to the extremely high level of student engagement in creating startups. This is one reason why one can observe some signs of convergence in their scaleup city landscape. The big scaleup cities are, however, in a unique position. Their size has raised them to a stage where the startup ecosystem becomes a self-sustaining—in a few cases even a self-reinforcing—system. Hence, despite having a relatively modest upstream from their own universities, particularly in relation to the size of their population and economy, they can attract talent from all across Europe.

**Acknowledgments** The research on which this study is based was also supported by the Pallas Athéné Domus Meriti Foundation (PADME, Hungary) and the John von Neumann University, Kecskemét.

#### **References**


**Prof. Dr. Zoltán Cséfalvay** is head of the Centre for Next Technological Futures at Mathias Corvinus Collegium (Budapest), where he lectures and conducts research on digitalisation, robotisation, artificial intelligence, innovation ecosystems, and startups in Europe. Previously, he worked as a senior researcher at the Joint Research Centre of the European Commission in Seville (2019–2020), served as ambassador of Hungary to the OECD and UNESCO in Paris (2014–2018), and was Minister of State for Economic Strategy in Hungary (2010–2014). He was a professor of economic geography at Andrássy University Budapest (2002–2010) and has been a professor at Kodolányi János University in Hungary for more than two decades. As a research fellow he worked in Budapest, Vienna, Munich, Heidelberg, and Cardiff. He recently published his latest book, *TECHtonic Shifts*, on the current industrial revolution.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 7 Assembling the Geographic Information Market in the United States**

**Luis F. Alvarez León**

This chapter explores the construction of geographic information markets in the U.S. by focusing on two key elements: (1) the development of mechanisms for two kinds of interoperability, namely legal interoperability (such as the acquisitions process between government agencies at different levels) and technical interoperability (such as data formats and spatial data infrastructures), and (2) the construction of Intellectual Property (IP) regimes. By exploring these two elements, the chapter shows how information markets (in this case, specifically geographic information markets) are shaped by the combination of institutional, legal, and technical frameworks established within territorial jurisdictions that allocate property rights, enable the dissemination of standardized data, and create conditions for the development and circulation of commercial informational products.

In the past decade geographers have increasingly centered markets as objects of analysis. This has been particularly productive for economic geography, which had hitherto exhibited a historical bias towards the sphere of production, to the relative neglect of the sphere of exchange. Berndt and Boeckler have made a compelling case for studying markets and marketization as geographical processes, providing conceptual tools to examine how markets come together in space as heterogeneous, deeply situated economic formations (Berndt & Boeckler, 2009, 2012; Boeckler & Berndt, 2013). Although this research agenda promises to deepen our understanding of the spatialities of capitalism, I argue in this chapter that further attention must be paid to the geographical dimensions present in the development of information markets. Understanding information markets geographically is particularly important because, with the rise of the digital economy, they have become central vehicles for the distribution of goods and services as well as for the production and circulation of new forms of knowledge. Yet their spatial dimensions are often hidden from view, obfuscated by popular terms such as "the cloud" and "cyberspace".

The pivot towards the study of markets has coincided with a period of productive examination of the multiple spatialities of digital technologies, encapsulated by the rise of the subfield of digital geographies. In this context, geographers have examined a range of spatial aspects of the digital, from its infrastructural and economic dimensions (Moriset & Malecki, 2009), sociospatial divides (Graham, 2011; Graham & Dittus, 2022), ability to reconfigure networks of economic relations and reshape industries (Alvarez León & Aoyama, 2022; Glückler & Panitz, 2016a, b), and co-constitutive nature with space (Kitchin & Dodge, 2011)—particularly in cities, which are increasingly computationally mediated (Graham, 2005; Mattern, 2021)—to the intensified representation of places through digital technologies and social networks (Crampton et al., 2013; Payne, 2017; Wilmott, 2016), the rise of new paradigms of urbanism mediated by digital platforms (Barns, 2020; Clark, 2020), the role of algorithms in producing new geographies (Kwan, 2016), and the persistence of glitches that reveal fissures in said geographies (Leszczynski & Elwood, 2022). This diverse body of work has enriched our understanding of the multiple co-constitutive relationships between digital technologies and space. However, one area that remains relatively underdeveloped, yet is central to the spatialization of digital information, is the geographic dimensions of regulation and market-making as they specifically manifest in the digital economy (Alvarez León, 2018).

In this chapter I argue that the interplay between technical factors and regulatory frameworks (specifically IP regimes) constitutes a mechanism that defines the roles of market actors, enabling and often binding them to operate with circumscribed functions within jurisdictional constraints, all of which can be spatialized at different scales. This argument is inscribed within a budding research agenda in economic geography that focuses on the geographic dimensions of law in economic globalization (Barkan, 2011; Sparke, 2013). More specifically, I build on scholarship examining how IP and other specialized legal regimes are instrumental in underpinning market-making, and capitalism at large (Christophers, 2014a, b, 2016). Furthermore, since law does not operate in a vacuum, the arguments developed here take seriously the technological architecture of digital goods and services, with a particular emphasis on geographic information. This integration of legal, institutional, and technological factors is intended to contribute to the project of developing a fuller political economy of the *geoweb*, or the myriad forms of geographic information that circulate on the Internet (Leszczynski, 2012), which has become increasingly central to the construction and operations of the digital economy. More broadly, identifying the specific mechanisms through which technological innovation, knowledge generation, and territorialized legal frameworks constitute the geographic information market in the U.S. can help us understand, govern, and regulate other digital information markets across geographies and domains.

The first section examines how interoperability is central to the construction of a market for geographic information in the U.S. Two specific types of interoperability are analyzed through their impact on the process of market creation: legal and technical interoperability. The chapter first explores the issue of legal interoperability, or how laws and policies regulating geographic information at different scales (national, state, county, city) operate together in the commercialization of this good. The focus then shifts to technical interoperability, or the mechanisms that enable the production and dissemination of standardized and homogeneous data. Two specific elements are highlighted: the TIGER file format developed by the U.S. Census Bureau, and the National Spatial Data Infrastructure, an overarching architecture for the standardized production and distribution of geographic information.

The second section of the chapter focuses on the IP regimes of geographic information in the U.S. The examination centers on the national scale, and particularly on works produced by the Federal Government. The chapter then analyzes the commercial aspects of the geographic information collected by two of the principal Federal agencies engaged in this activity: the U.S. Geological Survey and the U.S. Census Bureau.

Together, the IP regimes of geographic information produced by governments at different scales, combined with the mechanisms developed for legal and technical interoperability, provide the architecture of the geographic information market in the U.S. By focusing on the relations and interactions between these elements, the present chapter advances an understanding of information markets grounded in technical and institutional dynamics shaped by the legal and political economic context of each particular jurisdiction. In the case of the U.S., the legal foundations of IP, the relationships between different branches and levels of government, and the government's role in the market as producer and/or competitor interact with the institutional logics regulating data production to create the conditions for a growing geographic information market and geospatial economy. This chapter shows how the construction of digital information markets is far from a spontaneous process, and more than a merely technical one, since it is actively shaped by the legal, political, economic, and institutional conditions that are anchored in territorial jurisdictions and simultaneously unfold across administrative scales. Ultimately, understanding how information markets are assembled, and the geographic dimensions of this process, can help illuminate some of the key dynamics of a capitalist economic system that is increasingly reliant on the commodification and digitization of knowledge-intensive goods.

#### **Interoperability as a Building Block of Market Construction**

#### *Legal Interoperability*

The legal landscape regulating geographic information in the U.S. is characterized by the interaction between rules set at various levels by an institutional configuration that includes, among others, federal and state laws, governmental initiatives, federal, state, and municipal agencies and administrations, and decisions made by courts at various levels in the state and federal systems. Legal interoperability refers to the alignment and harmonization of different legal frameworks, which allows actors and organizations across jurisdictions to streamline the process of working together. This harmonization can take place vertically as well as horizontally across political scales. For instance, the National Interoperability Framework Observatory of the European Commission describes this relationship across scales in the following terms: "[Legal interoperability] might require that legislation does not block the establishment of European public services within and between Member States and that there are clear agreements about how to deal with differences in legislation across borders, including the option of putting in place new legislation" (National Interoperability Framework Observatory and European Commission, 2023, n.p.).

Depending on its state of development, legal interoperability can be either an impediment or a facilitator to the adequate circulation and use of geographic information in society (Onsrud, 1995, 2010). Creating the conditions for such circulation is critical for the construction and operation of a market that relies on the continuous recombination of informational inputs and their transformation into innovative applications. Therefore, to understand the configuration of the geographic information economy of the U.S., it is essential to identify how interoperability enables this process. This subsection focuses on legal interoperability, which works in conjunction with technical interoperability, covered in the following subsection.

Statutes such as Copyright Law (Title 17 of the U.S. Code) outline the protections that apply to geographic information depending on factors such as its producer and format. For example, data produced by the Federal Government are considered "government work" and form part of the public domain. On the other hand, Copyright applies differentially to data produced by private parties or subnational governments, often depending on the type of geographic information. Maps, for instance, have been a protected category in Copyright Law since the first act of 1790. However, as maps have become digitized, they are often divided into various components, principally the pictorial or graphic map and the underlying database. While Copyright Law continues to protect pictorial maps, the protection of databases is much more contingent. In an increasingly digitized economy, this uneven protection has become a source of contention.

Databases, which in the era of big data make up the majority (and often the most valuable) share of geographic information, are not necessarily protected by Copyright in the U.S. As a result of the Supreme Court's 1991 decision in Feist v. Rural, databases are considered compilations of facts and thus typically fail to meet the originality requirement for Copyright protection. Consequently, databases are often under the much more variable protection of contract law, which may in some cases result in even stronger safeguards than Copyright Law (Karjala, 1995; Reichman & Samuelson, 1997).

The distribution of geographic information produced by the government, such as census data and topographic maps, is in principle regulated by law. However, there is often flexibility in practice, clarified by policy documents such as OMB Circular A-130 (discussed in the second section of this chapter), which prohibits federal agencies from deriving additional financial resources from the distribution of government information and instructs them to recover only reproduction costs (Branscomb, 1994, p. 161).

While this regulatory framework places most of the geographic information produced by federal agencies in the public domain, there remains a great deal of variation in the practices and rules involving states, counties, and municipalities. Within the states, disputes are often settled in courts at various levels, from trial to appellate to state supreme courts. However, depending on the jurisdiction where a case is heard, it can move through the federal or state court systems, and some cases may eventually be adjudicated in the Supreme Court of the U.S. This was the trajectory of Feist v. Rural, the landmark case on databases, which was initially decided in the U.S. District Court for the District of Kansas in 1987 and subsequently overturned by the Supreme Court of the U.S. in 1991. Due to the jurisdictional hierarchy in the judicial system, decisions made in the nation's Supreme Court set a legal precedent for the entire country. Thus, while courts adjudicate cases and rule on specific issues relative to geographic information, such rulings are not necessarily consistent or all-encompassing, and may be contingent on specific case histories and jurisdictional variations.

As a result of this complex patchwork of regulations and jurisdictions, organizations such as the National States Geographic Information Council work in the interstitial space provided by the judicial system and focus on developing a standardized set of practices for geographic information across the country. While the legal aspect of interoperability remains elusive given the intrinsically fragmented government system of the U.S., it is complemented by technical advances facilitating the nationwide production and use of standardized geographic information. Overarching projects such as the National Spatial Data Infrastructure (NSDI) can partially bridge the gaps between legal regimes governing geographic information in the U.S. The NSDI seeks to streamline processes, enforce standards, and harmonize practices in the production, distribution, and use of geographic information throughout the country. This and other initiatives to advance technical interoperability have become key elements in the geographic information economy of the U.S.—especially since the ascent of digital information as a key economic asset. This is in part because the distribution and application of geographic information require up-to-date guidance, which the law is often unable to deliver.

Thus, while legal interoperability is a desirable objective, it must be complemented in practice by technical interoperability. Building information markets, then, requires the interplay of legal and technical interoperability, even though each moves at a different rhythm and focuses on disparate elements, such as standards, formats, rules, and practices for geographic information throughout the country. The next subsection discusses two building blocks of technical interoperability for geographic information in the U.S.: (1) TIGER, a format created by the Census Bureau, and (2) the National Spatial Data Infrastructure.

#### *Technical Interoperability: Standards and Formats*

#### **TIGER format**

Known to most users of U.S. Census Bureau data, the TIGER<sup>1</sup> data format was first developed during the 1960s and 1970s by the U.S. Census Bureau. Its development was motivated by two linked concerns: (1) to digitize the Census process, and (2) to create a national cartography of roads and boundaries for the decennial Census that could then be linked to all other data collected by the Bureau (Bevington-Attardi & Ratcliffe, 2015; Cooke, 1998). The resulting database produced an impact well beyond its initial objectives, and "has generated the largest civilian use of maps and mapping technology supported by the United States Federal Government" (Bevington-Attardi & Ratcliffe, 2015, p. 63). This technological innovation took place across a number of research teams in the Census Bureau and resulted from the productive interaction between staff and resources at this federal agency and research universities—particularly between the Bureau's New Haven Census Use Study of 1967 and researchers at Yale University (Cooke, 1998).

TIGER is an example of how the production of knowledge is mediated by the specific configuration of the institutions that produce it. In this case, the institutional geography of the Census Bureau played an important role in creating the conditions for this technological breakthrough. As Cooke has argued, the reconstitution of the New Haven Census Use Study into the Southern California Regional Information Study, and its consequent relocation from Connecticut to Los Angeles, provided this group with relative freedom to innovate within the centralized governance structure of the agency (Cooke, 1998, p. 54). From these conditions emerged an innovative file format capable of representing topology in a practical and efficient way, one that was easily adapted to new computing technologies. Furthermore, the fact that DIME/TIGER was created by a government agency was instrumental in the diffusion, national coverage, and massive use of this format.

In parallel, the California-based firm ESRI (a leader in the GIS industry) developed a separate file format for its ArcView software in the early 1990s: the *shapefile*, which would become the standard for non-topological geographic information (Theobald, 2001). While the shapefile is proprietary, and its development and evolution are therefore ultimately controlled by ESRI, the company has published its specifications, adding a degree of openness to the format. The shapefile has become a global standard of use due to a combination of its feature-centric manipulation, enabled by an increase in computing power, and the market dominance of the company's software packages, such as ArcGIS and ArcView (DiBiase, 2014; Theobald, 2001).

<sup>1</sup>TIGER stands for Topologically Integrated Geographic Encoding and Referencing. Prior to this acronym, the format was initially known as Dual Incidence Matrix Encoding, and later Dual Independent Map Encoding (DIME).

On the other hand, since their appearance in the 1980s, TIGER format files have become crucial in collecting, organizing, and distributing topological geographic information, particularly by government agencies. Its development by the Census Bureau, its use as a store for all topology, and its linkage to the Bureau's vast catalog of statistical data made the TIGER format a de facto standard across U.S. government agencies and administrations. Furthermore, as argued by Cooke, this format's impact as a catalyst for the geographic information economy was evident already in the 1990s: "[TIGER's] success has put the world's most useful general purpose spatial database into the hands of more users than any other GIS data resource. The current boom in business geographics is only possible because of the groundwork laid by the Census Geography Division in building TIGER" (Cooke, 1998, p. 56).

These two concurrent developments—TIGER, by a government agency, and the shapefile, by a private firm—have often been combined and distributed together, as the Census Bureau has done since 2007 through the distribution of TIGER/LINE shapefiles. This increases the reach of both formats and makes them easier for GIS users to download and manipulate. However, despite the success and wide distribution of this combination, the shapefile remains a proprietary format whose "openness" is mostly a pragmatic decision resulting from the market power of ESRI's software package ArcGIS. In this context it should also be highlighted that the efforts of the Census Bureau in developing a topological standard for digitized geographic information created the initial conditions for the massive distribution of geographic datasets and enabled government agencies across the U.S. and private users everywhere to collect and distribute geographic information with increased efficiency. In this way, the innovation in knowledge and technology that emerged from the informational needs of the Census Bureau became a fundamental building block for the construction of the geographic information economy in the U.S., and beyond.
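For readers unfamiliar with how this combination is consumed in practice, the following is a minimal sketch using the open-source geopandas library; the file name follows the Census Bureau's TIGER/Line naming convention but stands in for any locally downloaded copy, and the column names are those commonly shipped in these files:

```python
# Reading a TIGER/Line shapefile (distributed as a zipped archive by the
# Census Bureau) with geopandas. Assumes the archive has been downloaded
# locally; the file name below is a placeholder following the TIGER/Line
# naming convention.

import geopandas as gpd

counties = gpd.read_file("tl_2021_us_county.zip")

# Standardized identifiers such as GEOID are what link the geometry to
# Census statistical tables keyed on the same codes, the topological and
# statistical linkage described in the text.
print(counties[["GEOID", "NAME"]].head())
print(counties.crs)  # TIGER/Line files are shipped in NAD83 coordinates
```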

#### **The National Spatial Data Infrastructure**

A second key element in developing technical interoperability for geographic information in the U.S. is the National Spatial Data Infrastructure (NSDI). This nationwide project started in 1994 with President Clinton's Executive Order 12906 (the Plan for the National Spatial Data Infrastructure). The order was issued in recognition that digitized geographic information was not only increasing in value but also rapidly becoming essential for all types of decision-making in government as well as in industry. The NSDI thus responded to the need to standardize the collection and distribution of geographic information across agencies and scales of government in the U.S. As a collection of technical standards, policies, and procedures coordinated by the Federal Geographic Data Committee, the NSDI's goal is to align institutional practices concerning geographic information from the federal level. This is particularly important considering the disparate regulations, capacities, and incentives that shape the practices of production and distribution of geographic information across governmental institutions.

While the TIGER format developed by the Census Bureau is centered on the technical specifications of geographic information digitization and encoding, the NSDI encompasses the broader architecture in which said information is collected and transmitted within the U.S. government. Together, these two elements combine to increase the technical interoperability that underpins the geographic information economy in the U.S. This combination can sometimes lead to trade-offs between usability and openness. As mentioned above, the Census Bureau opted to distribute TIGER shapefiles due to their compatibility with most Geographic Information Systems. The question here is whether the higher restrictiveness implicit in favoring a private firm's proprietary format is counterbalanced by the widespread usage this very format may foster. This is not only a technical but also a political decision, one that can have ramifications for an entire spatial data infrastructure, in this case that of the U.S.

In fact, similar considerations have been central to the design of INSPIRE, the spatial data infrastructure of the European Union. INSPIRE has developed a collection of standards and procedures aimed at producing uniform geographic information datasets across all member states. Part of this overarching project is the use of the GML (Geography Markup Language) file format, a type of encoding for spatial data based on XML and developed by the Open Geospatial Consortium. It was selected by INSPIRE due to its status as an open data format. However, this normative choice comes with its own set of trade-offs. In the hopes of making geographic information in the EU as open as possible and allowing access by the broadest number of users, INSPIRE's choice of the GML format inadvertently made it more restrictive in practice. This is because GML generally requires a high degree of technical expertise and is not as compatible with many GIS programs as some proprietary formats.
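To illustrate the kind of overhead involved, the following is a minimal, hand-written GML fragment parsed with Python's standard library; the fragment is a toy example, not actual INSPIRE output:

```python
# Parsing a trivial GML point with the standard library. Even this small
# read requires knowing the GML namespace, the gml:pos encoding, and the
# axis order of the declared coordinate reference system, the kind of
# expertise overhead attributed to the format in the text.

import xml.etree.ElementTree as ET

GML_FRAGMENT = """
<gml:Point xmlns:gml="http://www.opengis.net/gml/3.2"
           srsName="urn:ogc:def:crs:EPSG::4258">
  <gml:pos>48.8566 2.3522</gml:pos>
</gml:Point>
"""

NS = {"gml": "http://www.opengis.net/gml/3.2"}

point = ET.fromstring(GML_FRAGMENT)
lat, lon = point.find("gml:pos", NS).text.split()
print(lat, lon)  # 48.8566 2.3522
```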

In contrast to INSPIRE, the NSDI's support of the GML format has been more gradual. While the openness of the format can increase technical interoperability between geographic information users and producers, its technical specifications remain beyond the reach of most users. A pilot study conducted at the Geography Division of the Census Bureau attempted to "utilize the GML standard to organize and present national scale TIGER data" (Guo, 2013, p. 91). This study found that such utilization still faces major issues related to data volume, comprehensive data organization, and document naming (Guo, 2013).

Considering the difficulties of transitioning to, and enforcing, a truly open format that can operate across a nationwide spatial data infrastructure like the NSDI, the trade-offs made by most government agencies in the U.S. are telling in some key respects. While the Census Bureau's own TIGER database is still "the most comprehensive geographic dataset with national coverage in the US" (Guo, 2013, p. 82), it is noteworthy that the Bureau has supported its release in ESRI's proprietary shapefile format as well as in a variety of other popular formats, such as Google's KML, which became an open standard in 2008 (Kirkpatrick, 2008).

This decision by the Census Bureau to opt for widespread distribution over strict openness is suggestive of the larger philosophy characteristic of U.S. governmental agencies' involvement in the geographic information economy. In the development of technical interoperability, they have opted to maximize the circulation of geographic information produced by the government. This decision encapsulates a powerful logic underpinning the construction of the U.S. geospatial market, whereby the Federal Government's technical decisions make its informational products widely available while catalyzing economic externalities that can benefit individual firms like ESRI or Google. This practice aligns with prevailing policy regarding the U.S. government's role in the information economy as a producer of informational inputs, with boundaries defined through the legal instruments discussed in the next sections of this chapter.

#### **Government Works and the Federal Level**

#### *Legal Status of Federal Government Works*

Under Title 17, section 105 of the U.S. Code, the category of Government works<sup>2</sup> in the U.S. is part of the public domain, which means that no actor can exert Copyright protections, and thus ownership, over it. This allows for the dissemination, transformation, and use of government works by anyone, for commercial and noncommercial purposes, both within and outside the U.S. Abroad, however, the U.S. government reserves the right to assert Copyright over its works (U.S. Copyright Office, n.d.; U.S. Government, n.d.). This legal regime covers informational works of any kind produced by the Federal Government of the U.S. that are not considered 'classified' for reasons of national security.

Historically, U.S. Federal agencies have charged users only for reproduction costs in order to maximize public access to government information. During the 1980s and 1990s, however, there was a policy shift towards pricing based on the public's willingness to pay, which was met with stiff resistance from civil society groups. Soon after, the Office of Management and Budget, through Circular A-130, reversed this trend by instructing "government agencies to recoup only the costs of reproduction of government information and not to derive additional financial resources to recover development costs" (Branscomb, 1994, p. 161). Thus, while the Federal Government may charge for information, it may do so strictly to cover the costs of reproduction. This limitation on revenue generation is the defining quality that establishes the Federal Government's role as an information producer and prevents it from competing directly in the market for informational goods. The Federal Government's information production is financed through taxes and made publicly available to fulfill three main goals: (1) disseminate public information, (2) support government decision-making, and (3) produce inputs for commercial development.

It is particularly the third point that is key to the construction of the informational economy of the U.S. As noted by Wells Branscomb, the limited scope of action of the Federal Government with respect to the commercialization of information is emphasized in OMB Circular A-130, which "also warns government agencies not to interfere or attempt to restrict secondary uses of information resources, leaving the private sector to take what it will and reproduce it either as is or with value-added services" (Branscomb, 1994, p. 161). Such explicit delimitation establishes a clear division of labor in the U.S. information economy, in which the Federal Government is the supplier of informational inputs to the private sector.

<sup>2</sup>Except for Standard Reference Data produced by the Secretary of Commerce, as indicated in the Standard Reference Data Act of 1968.

The rules mentioned above shape not only the informational economy of the U.S. in general terms, but also the specific markets for different kinds of information, such as geographic information. While geographic information is a constantly expanding category, it can be defined as data that are either directly georeferenced or otherwise linked to specific locations and places. This includes a vast array of spatial representations, ranging from maps to aerial and satellite imagery to climatologic, demographic, statistical, and economic datasets. Increasingly, geographic information also includes data produced by users through digital technologies such as mobile phones and social media applications and disseminated through online portals, all of which allows for its rapid and efficient transmission, transformation, and recombination.

The technological change introduced by digital and later networked technologies has important implications for the geographic information economy, and particularly for the role of the Federal Government as a producer of this good. For one, these technologies make it easier to collect, organize, and distribute information. This lowers the cost of public distribution from single access points, such as the Census Bureau's American FactFinder, which hosts demographic, economic, and statistical data, or the USGS's Landsat Earth Explorer, an online archive of satellite imagery.

On the other hand, these technologies place an increased burden of immediacy, expediency, and efficiency on government information producers. While strictly speaking U.S. Federal agencies are not market actors, they compete with private services for the online attention of users. These services, such as Google Earth, Google Maps, and ArcGIS.com, generally offer the same government-collected primary data repackaged in more accessible user interfaces with supplemental features. This supplier/competitor online relationship between government agencies and private firms exemplifies some of the reshuffling precipitated by new technologies in the geographic information economy.

While the role of the Federal Government as information producer in the information economy is clearly delimited by regulations such as OMB Circular A-130, mentioned above, it is also subject to change through its relations and linkages to other market actors. In the face of technological change and new demands placed by society in terms of access and distribution, Federal agencies often partner with private firms for the collection and dissemination of public information. While this is a common practice, government partnerships with the private sector have raised important questions about the control of informational resources and the role of those private firms as competitors in the market.

A suggestive example is the merging of data from two online portals of the Federal Government, Data.gov and Geodata.gov, in 2010. In 2005 the Department of the Interior had awarded the contract to develop Geodata.gov to the private firm ESRI, the market leader in geographic information systems (U.S. Department of the Interior, 2005).<sup>3</sup> Then in 2010 the same firm was awarded the contract to link Geodata.gov with the existing government portal Data.gov (Schutzberg, 2010). This represented an important step in developing a "one-stop shop" for the concentration and distribution of all types of geographic information produced by the U.S. Federal Government.

However, ESRI's involvement in linking the data and maintaining this service led to controversy due to the firm's status as the GIS industry leader, as well as the favored access and input control suggested by the firm's maintenance of the Geodata.gov portal (Fee, 2010; Pomfret, 2010). By maintaining this portal, ESRI would be in a position to redirect user traffic to its free online service ArcGIS.com, which would in turn allow users to create map mashups using data layers from Geodata.gov (Sternstein, 2010). The government would pay ESRI \$50,000 to undertake the data linkage project. This was an unusually low figure compared to the true cost, which the firm's president, Jack Dangermond, estimated at tens of millions of dollars, an amount that, as he explained, would be supplemented through licenses (Sternstein, 2010).

The connection with ESRI's online service, compounded by the low cost of the contract, drew criticism from some members of the geospatial community, who saw this as preferential treatment of a market leader: the government would be funneling users of a public service to a private platform while generating traffic and advertising benefits for said platform (Fee, 2010; Pomfret, 2010). While ESRI later issued a clarification stating that Geodata.gov would be only one of many sources of spatial data available to users of ArcGIS.com (Schutzberg, 2010), this episode highlights the tenuous line separating the production of public information by the government from the commercial implications that can arise from the involvement of private companies in its online distribution.

#### *Geographic Information in the U.S. Geological Survey and the U.S. Census Bureau*

The U.S. Geological Survey (USGS) is part of the Department of the Interior. It is a scientific agency whose principal mission is to collect and distribute reliable geographic information for the understanding of the Earth, hazard mitigation, resource management, disaster prevention, and the improvement of quality of life (U.S. Geological Survey, 2014). The USGS furthers these goals through outputs such as topographic maps, digital elevation models, soil analyses, orthophotography, and aerial and satellite imagery, among others. As a Federal agency, its informational products are considered "government works" under section 105 of Title 17 of the U.S. Code and thus constitute public information, except for certain primary data sourced from private firms under contract.

<sup>3</sup>This is known as Version 2 of the Geodata.gov portal. ESRI had previously been awarded the contract for Version 1, launched in 2003.

Part of the mission of the USGS is to maintain a public access point for its informational products. In recent years the USGS has pioneered several online initiatives to make comprehensive spatial datasets available to the public. One of the USGS's principal projects is the National Map, made in collaboration with local, state, and federal agencies. This online portal hosts "a seamless, continuously maintained set of public domain geographic base information that will serve as a foundation for integrating, sharing, and using other data easily and consistently" (Dewberry, 2012, p. 31). In addition to the National Map, the USGS has partnered with NASA to administer the Landsat satellite program and to offer the entirety of its imagery archive through the Earth Explorer portal. While most of the data can be downloaded directly, imagery that is not yet online can be requested for digitization for a charge covering reproduction costs (U.S. Geological Survey, 2016). This constitutes a peerless archive of publicly available satellite data dating back to the 1970s and spanning the entire globe.

The USGS engages with many local, state, and federal government agencies, as well as private actors and other sectors of the public, to determine their needs for geographic information and assess the potential benefits. While its aim is to further scientific endeavor, it does so with a keen eye on the applications and the societal and economic impact of its informational products. For example, for the National Enhanced Elevation Assessment, which collects updated elevation data for the entire country, the USGS conducted a detailed cost-benefit analysis that included the full documentation of "business uses for elevation needs across 34 Federal agencies, agencies from all 50 States, selected local government and Tribal offices, and private and not-for-profit organizations" (U.S. Geological Survey, 2014). The final report, produced by the consulting firm Dewberry (2012), identified benefits for 27 business uses ranging from the management of flood risks, infrastructure, and construction to urban and regional planning as well as health and human services. Table 7.1 shows these 27 business uses considered in the National Enhanced Elevation Assessment of the USGS. The benefits estimated across these business uses ranged from a conservative figure of \$1.18 billion to a potential of \$12.98 billion. According to this report, the annual combined highest net benefit for federal, state, and non-governmental actors had a benefit/cost ratio of 4.728 (\$4.728 in benefits for every dollar spent), yielding \$795 million per year (Dewberry, 2012, p. 8).
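As a consistency check on these figures, note that a benefit/cost ratio and an annual net benefit jointly determine the implied gross benefit and cost. The sketch below performs only this generic arithmetic; whether Dewberry (2012) derived its figures in exactly this way is an assumption:

```python
# Generic benefit/cost arithmetic: given B/C and the net benefit B - C,
# solve for C and B. This illustrates the relationship between the
# reported ratio (4.728) and net benefit ($795 million per year); it is
# not a reconstruction of the Dewberry (2012) methodology.

def implied_cost_and_benefit(bc_ratio: float, net_benefit: float):
    cost = net_benefit / (bc_ratio - 1)
    return cost, cost * bc_ratio

cost, benefit = implied_cost_and_benefit(4.728, 795.0)  # $US millions/year
print(f"implied cost ~ {cost:.0f}M, implied gross benefit ~ {benefit:.0f}M")
# implied cost ~ 213M, implied gross benefit ~ 1008M
```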

**Table 7.1** Business uses and estimated benefits of the National Enhanced Elevation Assessment (in \$US millions)

*Note.* Source: Adapted with data from Dewberry (2012, p. 5). Design by author

This economic calculation is indicative of the general operating practices of the USGS and shows awareness of the agency's role as the centerpiece of a 'system of engagement' in which geographic information is the key resource and catalyst of economic activity. As indicated by a senior executive at the USGS National Geospatial Program, this and other federal agencies have adopted an 'entrepreneurial' strategy, seeking a return on investment and avoiding competition with the private sector: not aiming to do things "better", but rather to do them "differently".<sup>4</sup> This is consistent with the USGS's complementary role in the market as information producer, whereby it connects the interests of local, state, and national actors, private and public, while aiming to balance their needs. As suggested by the wide range of business uses indicated above, one of the USGS's priorities is to nurture the market for geographic information by supplying informational inputs with an explicit consideration for the development of secondary applications.

Located within the Department of Commerce, the U.S. Census Bureau is a federal agency whose mission is to "serve as the leading source of quality data about the nation's people and economy" (U.S. Census Bureau, 2023). These data are collected through projects such as the constitutionally mandated Decennial Census, the Economic Census, the Census of Governments, the American Community Survey, and a number of other surveys and economic indicators (U.S. Census Bureau, 2023). While the Census Bureau is not strictly a mapping agency, it has played a fundamental role in the production of geographic information in the U.S. This is a function of the Bureau's need to aggregate and georeference its data at scales ranging from states to census tracts, block groups, and blocks, which is essential to accomplish its four principal uses:


The Bureau's comprehensive mapping has been enabled by its development of technical innovations, such as the TIGER/LINE format, a cornerstone of technical interoperability in the U.S. geographic information economy, as discussed earlier in the chapter.

Like the data produced by the USGS, the data collected by the Census Bureau fall under the category of "government works", which places them in the public domain, unprotected by the U.S. Copyright Act. However, to a greater degree than other federal agencies, the Bureau places a clear boundary around what is publicly available in order to safeguard the privacy of respondents, enforcing confidentiality over data that may be personally identifiable. Publicly available data comprise those at the scales of state, city, highly populated census tracts, and block groups. Data from thinly populated census tracts and blocks, on the other hand, are considered confidential.

The operations of the Census Bureau are bound and regulated by two laws: Title 13 and Title 26 of the U.S. Code. Title 13 specifies the operations of the Bureau and establishes its mandate of confidentiality, while Title 26 regulates the provision of tax information to other federal agencies, including the Census Bureau. The specific content of the questions in the Census and the budget to carry it out are subject to Congressional approval, which entails a continuous process of negotiation and can often lead to heated political controversies, such as the attempt during the Trump administration to include a citizenship question in the 2020 Census.

<sup>4</sup> Interview with senior personnel at the USGS National Geospatial Program. March 2016.

While the Census Bureau is a federal agency, its data collection and operations throughout the national territory require engagement with agencies at all levels of government. One key reason for this is that much of the geographic information at the local scale, which is considered the most valuable, is sourced directly from counties and municipalities. Cross-scalar engagement presents challenges for the Bureau, since it must often negotiate the acquisition of the rights and licenses to data that—unlike at the federal level—are not covered under the government works designation, but by a patchwork of state and local regulatory and property regimes.

Unifying and standardizing these diverse data sources requires a combination of organizational and technological strategies. For this purpose, the Census Bureau developed an in-house platform to verify addresses using GPS. Furthermore, organizationally, each regional office coordinates the acquisition of data with local governments and performs quality controls over each dataset.

The Census Bureau produces these data following its constitutional mandate, which sets a rigid schedule and well-defined objectives. Yet, like the USGS, the Bureau is quite aware of the commercial value of its informational products. As indicated by a senior employee, the Census Bureau's informational outputs have helped catalyze the development of widely used cartographic services, such as the Thomas Brothers Atlas (later purchased by Rand McNally) and Google Maps, both of which use TIGER/LINE topological data as primary inputs.<sup>5</sup> Furthermore, the economic, demographic, and social statistics produced by the Census Bureau are of great value for decision-making in both government and private industry. The Economic Census, for instance, is particularly tailored for commercial application by a wide range of market actors. The Bureau defines the official count produced by this endeavor as "[t]he foundation for business activity across the U.S. economy" (U.S. Census Bureau, 2018). Recognizing its value, the Bureau has divided Economic Census data into five categories, for which it has outlined a corresponding set of specific uses, totaling 15. These uses cover a range of activities, from measuring GDP to promoting small business and furthering local economic development. These uses, along with the data categories they belong to, are reproduced in Table 7.2.

<sup>5</sup> Interview with senior personnel at the U.S. Census Bureau. March 2016.

**Table 7.2** Data categories and specific uses of the Economic Census

*Note.* Source: Adapted from U.S. Census Bureau (2021). Design by author

It should be noted that the label of specific "uses" as employed by the Economic Census conflates two different classifications: entries such as Business Marketing can be considered direct applications of the data, whereas others, such as GDP, can be understood as indicators generated from specific variables. This categorical fuzziness notwithstanding, the language employed by the Bureau in identifying such "uses" suggests an attention to the "actionable" qualities of the data collected by this institution, and particularly to their potential for catalyzing economic activity. The utilitarian rhetoric used by the Census Bureau underlines how the data collection and distribution activities of this and other Federal agencies, such as the USGS (whose data uses were catalogued above), are simultaneously informed by the imperative of public use and by considerations of market potential. Beyond encouraging the diversified application of Economic Census data, this rhetoric serves a key function in the institutional logic of the Census Bureau when it is leveraged in budgetary and funding negotiations with Congress.<sup>6</sup>

In sum, while the Census Bureau and the USGS are Federal agencies bounded by law and limited in their market action, they are nevertheless embedded in market logic. This leads them to deliberately take on the role of information producers and, beyond disseminating public information for government and public use, to provide inputs directly aimed at developing a broad, cross-sectoral geographic information economy. While this market logic may not be the main institutional guiding force, it underlies their strategy and action, and it is woven throughout the documents, operations, and data produced by agencies like the USGS and the Census Bureau. This is in large part due to the legal and regulatory status that prevents the Federal Government from explicitly participating in market action while orienting the production of "government works" towards the public domain and the catalysis of economic activity.

<sup>6</sup> Interview with senior personnel at the U.S. Census Bureau. March 2016.

#### **Conclusion**

This chapter has shown how IP regimes and other regulations have the power to shape information markets by defining actors and outlining their functions within specific jurisdictions. In the U.S., the prevailing IP regime assigns the role of information producer to the Federal Government and prevents it from participating in the market as a competitor. By enforcing the public information regime known as "government works", U.S. Copyright Law simultaneously creates the conditions for the Federal Government to fulfill its mandate to serve the public and to engage in the production of inputs for the information economy.

This IP regime is underpinned by a separation between information production and consumption, in which the government subsidizes the former and implicitly entrusts the private sector with value-added activities and engagement in market competition. In tension with this, the chapter has argued that the regulatory framework for information, of which IP is one important part, limits the extent to which the government can engage in market actions. Thus, the specific characteristics of the institutional and legal architecture of the U.S. government are fundamental in shaping the construction of the (geographic information) economy in this country. Information markets, however, require a great degree of institutional as well as technical coordination. In this context, integrating various mechanisms of interoperability allows for the aggregation and standardization of information from different sources and facilitates its circulation for commercial and non-commercial purposes. As this chapter has argued, two forms of interoperability—legal and technical—combine to regulate the production of knowledge and digital innovations in the U.S. geographic information market while simultaneously defining the spaces where informational goods can circulate, whether they can be monetized, and the range of potential applications and secondary products.

The geographic information economy in the U.S. is characterized by a coexistence of diversity (of regulations, conditions of production, and relationships between state and market) and coherence, which is bridged through instruments such as the use of common information formats produced by the government (e.g., the Census Bureau's TIGER format) and their integration with proprietary formats (such as ESRI's shapefile). These technical developments, aimed at maximizing the distribution and use of information, are loosely regulated through cross-scalar and multi-sectoral initiatives (such as the National Spatial Data Infrastructure) aimed at developing standards, but whose relative laxity in enforcing a single set of technical prescriptions benefits the development of flexible solutions that can be mobilized for the marketization of geographic information. More generally, the arguments developed here help explain the role played by specific configurations of legal regimes, technical standards, and processes of knowledge generation in the construction, regulation, and maintenance of information markets. This perspective, in turn, can be deployed to understand the geographic dimensions of developments as diverse as the European Union's efforts to create a "Digital Single Market", the monetization of personal information on the Internet, the global emergence of markets for new kinds of informational assets such as Non-Fungible Tokens and cryptocurrencies, and other formations that characterize the continuously expanding global digital economy.
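The interplay of legal and technical interoperability described above can be made concrete with a short sketch. The following Python fragment—a minimal illustration assuming the open-source geopandas and pandas libraries, with hypothetical file and column names—loads public-domain TIGER/Line geometry distributed in ESRI's shapefile format, joins it with a private attribute table, and re-exports it, mimicking the kind of value-added reuse the chapter describes:

```python
# A minimal sketch, not a reference implementation: TIGER/Line geometry is a
# public-domain "government work", while the shapefile container is ESRI's
# (openly documented) format -- legal and technical interoperability combined.
import geopandas as gpd
import pandas as pd

# Hypothetical local copies of a Census TIGER/Line county shapefile and an
# in-house attribute table; both file names are illustrative assumptions.
counties = gpd.read_file("tl_2023_us_county.shp")
revenue = pd.read_csv("county_revenue.csv")  # assumed columns: GEOID, revenue

# Typical private-sector value adding: join proprietary attributes onto free
# government geometry, then export to yet another interoperable format.
enriched = counties.merge(revenue, on="GEOID")
enriched.to_file("county_revenue.geojson", driver="GeoJSON")
```

Because the underlying geometry is in the public domain, only the added attributes carry proprietary claims—precisely the division of labor between state and market outlined above.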

**Luis F. Alvarez León** (Ph.D., UCLA 2016) is Assistant Professor of Geography at Dartmouth College. He is a political economic geographer with substantive interests in geospatial data, media, and technologies. His work integrates the geographic, political, and regulatory dimensions of digital economies under capitalism with an emphasis on technologies that manage, represent, navigate, and commodify space. Ongoing research projects examine the geographic transformations surrounding the emergence of autonomous vehicles and the industrial and geopolitical reconfigurations resulting from the proliferation of small satellites.


## **Chapter 8 Thinking about Cyborg Activism**

**Nancy Odendaal**

The notion of the "smart city" has permeated policy discourses internationally over the last 10 to 15 years. As a concept, it refers to an ideal: a model of a connected urban system, enabled through digital and new technologies. Those engaging in academic and policy debates on the concept tend to fall within one of two camps: first, those offering a deeply skeptical critique, highlighting the unevenness of "smart's" implementation and its function as a problematic political discourse; and second, those espousing a technological determinism and, by extension, an optimism that underpins "success stories" of the deployment of new technologies in urban settings (Aurigi & Odendaal, 2021). This applies to the literature on smart cities in Africa as well. Driven largely by multinational engineering corporations and technology industry partners (Watson, 2014), the idea's implementation on the continent tends to be skewed towards large-scale infrastructure investment projects outside city centers. This results in isolated enclaves of wealth juxtaposed with the dilapidated infrastructure and largely informal urban environments that typify many African cities. In many ways, such smart city initiatives potentially exacerbate spatial inequality.

More recently, researchers working on the global South have been more interested in how digital tools interface with the livelihoods and survival strategies of the poor majority of city dwellers (Datta, 2018; Guma & Monstadt, 2021; Odendaal, 2021). In this chapter, I explore the extent to which members of social movements in Cape Town, South Africa, have harnessed and deployed smart technologies to oppose socio-economic polarization. Building on previous researchers' explorations of cyber and data activism (Gutiérrez, 2018; Milan & van der Velden, 2016), I study what the components of such strategies are. Generally regarded as one of the most unequal cities in the world, the second largest South African city has a history of social activism and progressive urban politics, worthy of exploration within the context of the platform economy. My emphasis is on how the incorporation of data and digital tools, such as the use of social media, contributes to knowledge generation practices that juxtapose more conventional forms of data representation.

In South Africa, the redress of apartheid inequalities through spatial integration and socio-economic development is largely driven by municipalities. Much of this is informed by evidence-based thinking and governance paradigms rooted in quantifiable indicators for performance monitoring. This accords with international trends. International development agencies' use of data benchmarking and governance-by-numbers to track progress in the global South is of particular relevance in contexts considered "marginalized." The New Urban Agenda (NUA), adopted at Habitat III in Quito, serves as a crystallized example of the hope invested in numbers. The emphasis is on cities as places of opportunity and connection, relationally connected across geographies, and hence able to be compared in the quest to learn from one another. Those utilizing sustainable development goals (SDGs) and associated indicators assume a normative base that is universal and quantifiable, whereas the authors of the growing literature on Southern Urbanism suggest that comparative endeavors are strongly tied to Northern normative constructs of the "good city" (Barnett & Parnell, 2016).

In dispute, however, are the allowances these benchmarks' creators have or have not made for context and the characteristics of particular geographic localities. The global and national systemic constraints to inclusive and just cities are intrinsically tied to history and place, or as Maria Kaika (2017) puts it: "The failures of the past have made us more savvy and more knowledgeable. They should have also made us wise enough to stop claiming that global socio-environmental equality, social welfare or value creation can be reduced to indicators" (Kaika, 2017, p. 6).

Numerical indicators are not value-free, as shown in the "masking" work that numbers do in the name of transparent governance. Indicators and associated benchmarks signal consensus on "what counts and what doesn't"—what could be considered indicative of progress. Signals of "dissensus" are perhaps more adept at capturing "what is not working" through insight into conflict and disagreement (Kaika, 2017). By focusing on what is lacking, one can shine a spotlight on the dysfunction of urban systems and governance, allowing the cracks to emerge.

This can, of course, hardly appeal to state decision-makers—accordingly, oppositional data-driven initiatives tend to evolve only in response to crises, dramatic policy interventions, or major events. Studying the Arab Spring, for example, can reveal the performative dimensions of Information and Communication Technology (ICT). In Egypt and Tunisia, social media played an important role in influencing key debates before both uprisings and assisted in spreading democratic messages beyond the countries' borders, both during and after demonstrations (Howard & Hussain, 2011). ICT was part of broader heterogeneous networks that included television and radio and built upon existing social and kinship capital (Allagui & Kuebler, 2011). The media's power was no longer vested in the state alone, enabling distributed voices and visual content that potentially challenged official discourses. These multi-layered, technology-mediated exchanges are subject to context, differentiated access, and existing social networks.

Moments of crisis that gel oppositional forces can also spark what South African anthropologist Steve Robins (2014a) calls "slow activism". In examining the work of social movements that have challenged the City of Cape Town's claims to pro-poor service delivery, he explores the combined use of new media and social network connections dating back to the apartheid struggle to enact an ongoing oppositional voice and keep critical social justice issues in the public imagination. Voicing dissent through the repackaging of data and the documenting of the "everyday" is an important strategy in challenging the state consensus. Enabling such work to become part of the public discourse speaks to an epistemological shift whose supporters value the experiential dimensions of the urban: contingency, emergence, and embodied testimonies used to counter aggregated official narratives. The legacy of mobilization and struggle politics has shaped the ways in which civil society organizations engage the state. In the last decade, however, actors have increasingly used social media and digital platforms as tools of mobilization and information dissemination. In this chapter, I argue that this is not a mere extension of the suite of tools available to such groups, but that it constitutes a form of knowledge production and social compact that is more attuned to human experience, and therefore more embracing of the experiential dimension of the urban realm. A core technique used in this regard entails storytelling, or the everyday representation of urban experiences. I here explore the relationship between storytelling, African urbanism, and urban activism by applying the concept of the cyborg.

The notion of "cyborg activism" speaks to a hybridity that typifies digitally informed social action. The "cyborg" motif—an entity that integrates and transcends the visceral boundaries of the body shaped by biology—provides a useful frame for understanding data-mediated activism. The intimate exchange between the algorithm, the human, and urban space entails a reassembling of the individual as containing elements of human and machine, nature and technology (Asenbaum, 2017). In thinking through the elements of a technology-mediated activism, the usual "binaries" of nature versus technology, identity versus anonymity, and public versus private are reconfigured to allow for a blurring of the reason-emotion divide (Asenbaum, 2017). "As the private pervades public spaces, the modern separation of rationality, objectivity and cool-headed politics, on one hand, and emotion, passion and affect, on the other, is reconfigured" (Asenbaum, 2017, p. 5; emphasis in the original). The use of spectacle is therefore not only a media strategy to shine a dramatic light on injustice, but also a matter of "choreographies of assembly" that become trending places, which together with devices such as hashtags become magnetic, heterogeneous assemblages (Gerbaudo, 2012, p. 12). The emotional tension created through social media acts as a different kind of aggregator from the numerical ilk, constructing common symbols and momentary unified identities from diverse participants—what the activist Zackie Achmat, in Robins' (2014b) portrayal of Cape Town's Social Justice Coalition, refers to as a "moral consensus". Thus, the experiential dimension is key not only to mobilizing consensus and assembly, but also to creating data of dissensus with which one may combine the "slow burn" of monitoring, reporting, and information processing with emotionally charged representations of suffering. In appropriating technology, its emergent qualities are enrolled as time and situation demand.

The question is: Are these largely fleeting assemblies situationally focused, or do they represent an epistemological shift in which the experiential and emotional dimensions of urban data can shift public discourse—essentially, what counts as knowledge and truth? In order to explore this question in a situated way, it is necessary to understand how the harnessing of new media in activism is not only situated in the (South) African urban context, but also informed by it.

I examine two cases here, which I then abstract through the notion of the cyborg. "Ndifuna Ukwazi" is a civil society organization whose members focus on inclusionary housing in Cape Town. "Cape Town Together" is a network of community action networks (CANs) whose members mobilized resources on a neighbourhood scale when extreme lockdown measures were implemented in South Africa during the coronavirus pandemic in 2020. The aim was to address food and income insecurity amongst vulnerable groups.

My empirical work comprises a total of five interviews with key actors in the two organizations portrayed, initially held in 2018 and then updated in 2020. I have complemented this with a review of grey literature and 34 news articles on the two cases, conducted between 2017 and 2021. The former comprises the court documentation submitted by Ndifuna Ukwazi in its "Reclaim the City" campaign and hardcopy pamphlets distributed in conjunction with its cyber activities. In the case of Cape Town Together, I scrutinized the network's "Ways of Working" guideline document together with academic outputs from key actors referred to in the analysis of the case. An important part of my work was analyzing organizational discourses in Ndifuna Ukwazi's "Reclaim the City" campaign of 2017, done through analysis of the organization's tweets during the campaign, and surfacing the storylines that underpin its Instagram posts of 2021, during its initiative to address inclusionary housing on vacant publicly owned land.
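A simple descriptive pass can support this kind of discourse analysis. The sketch below—a minimal illustration and not the chapter's actual method, with a hypothetical file name and column—tallies the hashtags and mentions in an archived export of campaign tweets to surface the symbols and actors around which storylines cluster:

```python
# A minimal sketch: count hashtags and @mentions in an archived tweet export
# (hypothetical CSV with a "text" column) as a first pass at surfacing the
# unifying symbols and personalized actors discussed in the analysis.
import csv
import re
from collections import Counter

hashtags, mentions = Counter(), Counter()
with open("rtc_campaign_tweets.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        text = row["text"].lower()
        hashtags.update(re.findall(r"#\w+", text))
        mentions.update(re.findall(r"@\w+", text))

print(hashtags.most_common(10))  # recurring campaign symbols
print(mentions.most_common(10))  # most frequently addressed public figures
```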

In the following section, I review work on African urbanism and explore the nexus between infrastructure studies and platform urbanism in order to examine the relationship between technology appropriation and civic activism. After discussing the two case examples, I conclude with the implications for future research and thinking about social activism in the contemporary African city.

#### **Engaging Smart Urbanism in African Cities**

There is an evident tension between the visual imaginaries of the smart city in Africa and the many qualities that make up the "real" city. It essentially translates into discrepancies between visual narratives and everyday experiences. It follows that in order to understand the digitally enhanced city, one must understand the dynamics of African urban spaces. Increasingly, African urban scholars from the North and South are calling for a global perspective that recognizes African urbanism as possessing embedded qualities of its own rather than representing incomplete versions of the ideal developed ("Western") city, shining an investigative lens on the many approaches and strategies employed by a diversity of stakeholders in the continuous redefinition of urbanity. In what Simone and Pieterse (2018) refer to as an age of "dissonance", the boundaries in urban Africa between the global and the national, between the public and the private, and between the formal and the informal are increasingly blurred. Africa has always been global and has influenced the rest of the world as much as it has been shaped by it, producing different modes and models of "worlding" (Robinson & Roy, 2016) that are also distinctly local. Understanding the African city, therefore, requires engaging with the substantive qualities of its spaces, whilst recognizing trends such as informal urbanization and an inherent propensity for on-the-go problem solving and livelihood strategies. I argue elsewhere that understanding the "everyday" or interstices of urban life, in relation to the appropriation of technologies, demands an engagement with livelihood strategies and urban culture (Odendaal, 2021). A core part of this is an engagement with the use of smart phones and, increasingly, the use of social media and the proliferation of digital platforms.

Understanding smart technologies in African cities, in a way that is contextualized and relevant, requires a view that embraces heterogeneity and co-production (Odendaal, 2021). The potential of existing service infrastructure is not maximized to effectively facilitate employment and economic growth; moreover, misguided infrastructure investments may constrain mobility and livelihoods. "This is more than simply building new roads, rails, power lines, and telecommunications. It is more than a matter of constructing synergies between the physical, the institutional, the economic, and the informational" (Simone, 2010, p. 29). A socio-technical reading of cities holds that the situatedness of these milieus requires deeper understanding (Anderson, 2002; Philip, Irani, & Dourish, 2012). From the heterogeneous assemblages that emerge in well-resourced spaces (Furlong, 2011) as well as in cities of the global South (Guma, 2019; Lawhon, Nilsson, Silver, Ernstson, & Lwasa, 2018), one may conclude that human ingenuity, reinvention at the margins, and continued appropriation require one to view urban change as iterative and experimental (Odendaal, 2021). A focus on everyday practices serves as a conceptual inversion and foregrounds people as infrastructure (Lawhon, Ernstson, & Silver, 2014; Simone, 2004). Spaces for learning and creativity can then be uncovered by recognizing the materiality of the digital and how its interface with everyday, micro-level "sociotechnical niches" encompasses small networks of actors that put new technologies on the agenda, promoting innovations and novel technological developments.

Central to a socio-technical reading of cities is an emphasis on agency—on problem solving, using platform technology, towards livelihood enhancement and voicing dissent. In South Africa in particular, activism and mobilization are deeply ingrained in urban cultures. Also emerging are techniques that combine technological tools with a more traditional array of collaboration strategies to maximize the breadth of participation and deepen connection. Practitioners of this mode of activism rely on rational strategizing, honed through the anti-apartheid movement, and on the use of technology to share subjective interpretations of issues and human experience. In the following section, I expand on the other elements of livelihoods in African cities, and on the digitally enhanced mobilization tactics used to address exclusion and survival. I aim to uncover what these strategies contribute to urban inclusion, and how they differ from past interventions from the bottom up. This partially relates to the question of what qualifies as data and the nature of information. In many ways, I am considering the stories of the everyday, but I am also posing an epistemological question as to the nature of knowledge in urban practices.

#### **The Politics of Dissensus**

Understanding the scaffolding of such dissensus, and the means through which it is communicated and represented, provides insight into strategies of knowledge production that can generate a more accurate representation of urban life. It necessitates technology appropriation, but it also implies an aspirational shifting of policy discourses. Furthermore, I would argue that it entails conveying an experiential dimension to sharing, aimed at evoking an emotional response. Unlike "cold, hard facts," strategies such as spectacle or dramatic portrayals of "everyday" suffering tap into the public imagination. Robins (2014b) documents what has become known as the "poo-protests" in Cape Town, where (amongst other public actions) activists emptied human waste onto the concourse of the Cape Town International Airport to draw attention to the adverse sanitation conditions in informal settlements on the city's fringes. Here, actors transmit information through visual media, hashtagging in order to link events in real time and draw the attention of the mainstream media. The spectacle's power lies in elevating issues to policy discourses. "Prior to the Toilet Wars, the shocking sanitation conditions in informal settlements seldom made it into the mainstream media or national political discourse" (Robins, 2014b, p. 480). Much of this is enabled through a free press and a context that allows for civil society activism. Where such organizing is not possible without repercussions, digital media holds a meaningful ability to enable network relations across geographies. In his work on the Cuban blog Voces Cubanas, Kellogg reflects on the use of narrative technologies in "enabling nodes around which relationships form and alliances are built . . . Within networks, narrative technologies allow new relationships with other actors" (Kellogg, 2016, p. 44).

The work that technology does in concert with human agency forms part of alliance building and network making. Kellogg's Cuban example and Robins' South African case study are not the only instances of technology challenging the state's control of knowledge, but in both it is productive of "alternative discursive spaces and subversive narratives" (Kellogg, 2016, p. 23). It is performative and experiential. The power of spectacle is that it evokes an emotional response that lingers in the public imagination and carries political currency. The sway of the "slow burn" of ongoing networking and mobilization is that it perpetually builds alternative narratives. Using a socio-technical lens on his work in Cuba, Kellogg (2016, p. 33) writes of the heterogeneous range of actors that contributes to networks becoming "cyborg entities, homeostatic assemblages of heterogeneous techno-social elements with porous borders and radical political motivations". Here, the written narrative, produced in blog form, is an actant that contains flexibility and fluidity, potentially shaping political discourse.

#### **Dissensus in Cape Town and "Moving at the Speed of Trust"**

The hard lockdown imposed on March 26th, 2020, following the first coronavirus cases in South Africa, was well intentioned. Focused on protecting lives and public health services, the government acted swiftly and decisively. Yet it also displayed a reckless lack of understanding of how food systems work in marginalized spaces in South African cities.

The impact of the lockdown was that many people were unable to earn an income to buy food, informal traders were unable to sell food, and school feeding schemes were closed. The result was a food crisis that surfaced the vulnerabilities of the wider food system.

Shortly before lockdown, a group of medical researchers, public health specialists, and activists formed a collective entitled Cape Town Together (CTT) to intervene in what was anticipated to be a public health and humanitarian disaster. Anticipating a "command and control response" from the state, the group understood the shortcomings of a top-down intervention and the impact it could have on marginalized communities. The collective experiences and histories of this group's members included the Ebola response in West Africa as well as the "Fees Must Fall" movement at South African universities. These experiences provided lessons on the limitations of a hospi-centric approach to resisting the virus, and on the efficacy of decentralized mobilization using digital platforms.1 "Community intelligence—in other words, the tacit, situated knowledge arising from and produced within lifeworlds and lived realities—cannot be compartmentalized into a standard operating procedure" (Van Ryneveld, Whyle, & Brady, 2022, p. 2). CTT's pioneers developed an online toolkit to encourage neighbourhoods to self-organize into autonomous, local community action networks (CANs). From an initial 14 such networks, 170 CANs developed across the city within two months (Van Ryneveld et al., 2022).

No two CANs are the same: They are developed in accordance with the specific characteristics of their focus neighbourhoods. The CANs build on existing mobilization energies, but with sets of values and tools intended to enable self-organizing, neighborhood-level, community-based responses to Covid-19. There is no hierarchy or central organizing structure; CANs are de-centralized, adaptable, and collaborative, each unique in its composition of members and representation from other organisations, such as faith-based groups or street committees. There is also a temporal flexibility: "New thematic CANs emerge organically on a regular basis in response to emerging needs, and old ones disintegrate as the energy of the group is needed elsewhere" (Van Ryneveld et al., 2022, p. 2).

<sup>1</sup>Dr. Leanne Brady, personal communication, September 15th, 2021.

One of the key principles that informs CAN functioning is the notion of "moving at the speed of trust": Members see action as motivated by human relationships and social solidarity rather than by pre-conceived project plans. These values are represented in CTT's "Ways of Working" mandate. New forms of community organization emerged: connecting across historical spatial boundaries, with a solid set of principles where connection is the basis of doing and sharing. Storytelling as well as building relationships and non-partisan partnerships together with adaptive leadership form the bedrock of what is now considered the CAN "movement."

The organizing model is simple and underpinned by digital platform synergies. An online portal allows activists to register a new CAN or join one in proximity to the applicant's home. When the movement started, new CANs were formed through CTT enabling connections via WhatsApp and email, based on shared neighborhoods and interests. The CAN "starter pack" provides a resource on Covid-19, safety protocols, and guiding principles for working in a non-hierarchical and decentralized way. These "ways of working" were formulated as a frame for interaction, many of them designed to avoid the pitfalls of social media and online communication. Digital organising was key, with platforms forming a core part of the organizational infrastructure, but as Leanne Brady, one of the pioneers, confirms, the digital divide in relation to data costs and access to smart phones was a definite constraint: The CANs collectively spent close to R100,000 (approximately USD 6,600) on such costs.2

CAN members appropriate smart features, mainly in the form of social media, in accordance with local needs, but the function of the WhatsApp group is central. How this proliferates into other forms of "smart" depends on the definition of local priorities. Knowledge dissemination reflects place-based histories and resources, with many CANs using the networking capacity of individuals to overcome constraints to movement. This social network of networks stands in juxtaposition to the one-size-fits-all state response. Members of each network formulate their own analysis of what the most pressing issues are and, using local resources, design self-organizing neighbourhood initiatives. The "ways of working" frame is critical to ensuring that misinformation does not spread and that a "calling out" culture is avoided. WhatsApp provides a bounded network model that ensures groups are representative of joint interests. Facebook provides a visual and storyboard platform that participants view as more widely accessible and useful for keeping the broader public informed whilst also providing a starting point for new recruits. Here, the role of volunteer moderators has been essential to ensuring the space is safe from trolls and misinformation peddlers.3

<sup>2</sup>Dr. Leanne Brady, personal communication, September 15th, 2021.

<sup>3</sup>Dr. Leanne Brady, personal communication, September 15th, 2021.

CAN members adopted a partnering model to enable linking within communities but also across neighborhoods, reflecting the agility of this modular approach. At the time of writing, 12 such pairings existed. The result has been a sharing of information and ideas and two-way learning, with food relief a major emphasis. A radio interview with one such pairing reveals collaboration that included fact-checking fake news, transferring mobile phone data, and topping up electricity service payments.4 Mention is made also of partnering with Uber drivers to enable food delivery within lockdown restrictions.

The CAN initiative is a continuation of a culture of mobilization that was refined during the anti-Apartheid struggles, especially in the late 1980s. It is also informed by more recent struggles in which activists made use of technology, such as the Treatment Action Campaign of the 1990s (Grebe, 2011). South Africa's post-Apartheid landscape is replete with reconstruction discourses whose participants place great faith in the state to enable more inclusive and representative cities. Many feel—as is evident in the number of service delivery protests and counter movements—that the state has largely failed the poorest members of its population. Given the country's turbulent history, the focus on social justice is apt and understandable, and the resort to activism a natural progression. Cities were battlegrounds of (often violent) struggles against the Apartheid state during the late 1980s, when activists overwhelmingly focused on the material inequalities represented by skewed infrastructure provision. These struggles continue today, and whilst mobilizers rely on established activism networks forged in the late Apartheid years, the digital overlay has brought with it a form of engagement that is a hybrid of online and offline strategies, speaking to a more differentiated public.

With my second example, I focus on Ndifuna Ukwazi (NU), a group of activists who use research and strategic litigation to campaign for justice and equality in poor and working-class communities in Cape Town. Whereas the CAN movement was precipitated by the pandemic, NU's activities were galvanized by an event that surfaced many of the tensions that exist between private land markets and the need for affordable shelter.

In late 2015, a former public school, named Tafelberg, located in the Atlantic Seaboard suburb of Sea Point—a high-density, middle- to high-income, mixed-use neighbourhood on the oceanfront—was advertised for sale to a private education company. The public advertisement mobilized the protest of domestic workers and low-income earners in Sea Point, who argued that the city should follow through on its stated policy intention to deliver social housing on well-located publicly owned land in the city, not sell it to private concerns. Seasoned community organizers teamed up with local interest groups in staging a campaign entitled "Reclaim the City" (RtC), assisted by NU. The activists' primary aim was to stop the sale of the school site. The campaign subsequently evolved to include two strands. The first was continued pressure on the municipality to deliver affordable housing on inner-city state land, beyond the Sea Point site. The second followed the eviction of tenant families in a gentrifying neighborhood called Woodstock, also near the CBD, with campaigners demanding that the City of Cape Town (CoCT) provide temporary accommodation in the area.

<sup>4</sup>Cape Talk Podcast: Lunch with Pippa Hudson, April 6th, 2020.

The campaigners oscillated between a steady process of documentation and legal work and digitally augmented public events and interventions. The employment of "spectacle" in enabling emotional connection through the sharing of personal experiences is a significant element of the campaign's public profile and essentially defined its origins. The campaign's tagline "Land for People not Profit" soon became a familiar feature in public spaces in Sea Point, following the first protest march on March 1st, 2016. Activists augmented their ongoing protests at the Tafelberg site with social media. Examining the campaign Twitter feed at the time, I found that a significant feature is the personalization of key actors implicated in the sale: the provincial premier, the first judge appointed to hear the court case in which NU challenged the sale of the site, the leaders of the RtC campaign, and national and local politicians. As is often the case with social media, the discourse became uncomfortably personal at times, yet those waging it succeeded in creating the storylines necessary to convey household struggles against gentrification and the follies of property capital.

The sale of the Tafelberg site was suspended as a result of the public pressure facilitated by RtC and Ndifuna Ukwazi, the organizational arm of the campaign. A call for architectural proposals subsequently demonstrated the technical viability of social housing for the site. The campaign worked. In August 2020, the Western Cape High Court (the provincial court in Cape Town) set aside the Western Cape Provincial Government's sale of the property to a private buyer for R135 million, based upon the argument that the province and the City of Cape Town have a constitutional duty to combat spatial apartheid.5

Yet the systemic issues that led to the campaign's creation in the first place still need to be addressed, and what was initially a protest against the sale of one site became an ongoing campaign for the reallocation of centrally located public land for social housing. Here, RtC activists took the campaign's experiential dimension further with the subsequent "symbolic occupation"6 of two vacant public buildings in the city. The location of these properties is significant. One is located on the fringes of the Victoria and Alfred Waterfront, a mixed-use shopping precinct combined with high-end residential development and hotels. The latest high-profile addition to the precinct is a grain silo conversion by London-based Heatherwick Studio, which includes a luxury hotel and houses the Zeitz Museum of Contemporary Art Africa (MOCAA), opened in late 2017. The second occupied site, in Woodstock, is a vacant hospital in close proximity to the galleries, restaurants, and design quarter that define the neighborhood's gentrification.

<sup>5</sup> I was an expert witness for the application, arguing the case that the City of Cape Town and the Western Cape Provincial Government had not addressed spatial apartheid.

<sup>6</sup> https://stopthesale.net/occupation/

The choice of sites is strategic but also indicative of the value of shining a light on the spatial paradoxes that have come to define Cape Town. This is evident in the infographics and mapping shared on social media, the visual depiction of the city's glamour in contradiction to the hardships of those on the edges, and the personal stories. More recently, the campaigners have focused on Airbnb's expansion into the city and the location of short-term rentals. Here, the appropriation of data's power is most obvious in the form of online maps, used to illustrate the impact on land and property markets. There was no substantial outcome to this part of the campaign, unlike in other parts of the world, where Airbnb was either restricted or banned (Cocola-Gant & Gago, 2019; Van Doorn, 2019). The visual representation of the extent to which the majority of Capetonians are unable to afford well-located housing did, however, strike a chord. The city and provincial governments have since formulated inclusionary housing policies that acknowledge the skewed nature of the city's property market.

In addition to the spikes in activity that mark the milestones as well as entry points of connection to the campaign, the various actors engaged in an ongoing mobilization process that formed a "slow burn" of diverse activities. The most significant, politically, was the legal campaign to stop the sale of the Tafelberg site, as mentioned above. Later, activists waged an on- and offline campaign objecting to zoning proposals for the Somerset Precinct near the Waterfront (which contains the activist-occupied property), demanding allowance for more social housing. The latter is indicative of the contest of numbers that played itself out as occupancy ratios and floor space allocations were debated. Yet selective representation of data is evident in both camps: RtC actors are as astute as those of the CoCT in ensuring that the numbers "dance" in ways that support their arguments.

A significant campaign aim was raising public consciousness. This included information sharing in public spaces, regular editorial content by activists and supporters, and targeted alliances with stakeholder groups such as the Sea Point Jewish community (an established interest group in the neighborhood) as well as other state agencies and property development interest groups. As an alternative to the usual economic discourse that favors an unfettered property market, the message that well-located social housing makes economic sense for households and the city represents a significant shift in public consciousness. This was later reflected in an inner-city housing plan, launched in July 2017, whose drafters allocated a number of well-located sites within the city core for social housing. More recently, in 2021, the provincial government launched its own inclusionary housing policy.

Whilst neither example can be portrayed as a smart city model (I would argue no such thing exists), and a deeper interrogation will no doubt reveal some inconsistencies and inaccuracies, they nevertheless represent impressive interventions, the activists of both achieving significant shifts in public awareness in their respective short time spans. Both examples comprise an array of on- and offline strategies that range from populist representation of information to, in the NU example, a technically astute interrogation of commonplace "truths" regarding property markets and the space economy of the city. The CANs became known within and beyond the City of Cape Town for enabling an effective intervention during the food crisis that resulted from the initial hard lockdown during the early days of the Covid pandemic. A significant part of both sets of interventions is the foregrounding of the "everyday" experiences of city dwellers in the face of gentrification, food insecurity, and, some would argue, state inaction. Both collectives wove emotional, technical, and political "stories" into their narratives and performed the ongoing labour of legal, media, and policy engagement, representing a fascinating entry point into what cyborg activism may look like and the potential it holds for effecting change.

#### **Conclusion: Vestiges of Cyborg Activism? Or Renewed Conceptualization?**

In South Africa, dashboard urbanism coincides with a managerial local government system, conveniently poised to use the language of indicators to support market-led urbanism, despite policy discourses whose participants claim otherwise. The normative and political work achieved through numbers, as well as the decontextualized representation of market "truths" and the benchmarking that often accompanies it, are symptomatic of the confluence between technology innovation and governance frames. My aim in this chapter was to present an alternative approach to telling "truths" in relation to interventions in those parts of the public realm that are normally within the ambit of the state.

Reflecting on both case examples, I can isolate several common features. One is the hybrid nature of the collectives—or, to use the relational term, assemblages—in which "traditional" and digital media are combined to inform the public and extend reach. A further feature is the means through which narrative continuity is achieved, with the best features of each digital and analog tool deployed to frame the problem, thus harnessing different functionalities of platform elements in relation to target audiences and associated activist outcomes. In the NU case, for example, activists combine capturing the public imagination through on-site theatre, online video, and cartoons with interactive workshops on the legal frameworks that inform housing and spatial planning. Capturing everyday realities and stories, with moderators holding the space to ensure adherence to agreed-upon values, speaks to an opening up of activist possibilities. The centrality of normative values is essential in this regard, especially in the case of the CANs, where one must consider the diversity of spatial contexts and incumbent communities.

An important feature of both examples is an engagement with qualities of place, the activists combining a mix of WhatsApp, Twitter, Instagram, Facebook, and radio with on-site spectacle and staged protests at opportune moments. The actions of both organizational entities also had physical impacts through the establishment of community gardens, cloud kitchens, distribution of food tokens, and occupation of vacant public buildings.

Campaigners portrayed the experiential dimensions of urban poverty together with the quantitative work required to lend further legitimacy to their claims. The flexibility of these cyborg hybrids speaks to the emergent and embodied nature of contemporary urbanism. Enrolling the experiential dimensions of urban life into the knowledge domain not only provides an alternative to data-driven, dashboard urbanism; it expands and deepens the discourse terrain of urban policy. In some ways, it differs little from the city itself: a little messy, sometimes misguided, but real and probably closer to the truth than the numbers claim.

Activists combine numeric evidence and visual representation to speak to both minds and hearts.7 By appealing to people's sensibilities of what is "decent" and using data discerningly, NU, for example, creates nodes of interest that enrol combinations of stakeholders not usually in agreement. In a city as divided as Cape Town, this is very poignant. The use of Instagram opened up space for this unexpected engagement, with activists putting careful thought into how to "land a message" whilst preserving an accurate digital archive.8 The capture of a digital archive in combination with the facts that drive the activists is critical to NU's communication campaign.9

In discussions with NU and CTT, mention was made of how campaigning is also influenced by platform market trends. The interoperability between Facebook and Instagram enables activists to integrate campaign messages and expand their audiences. They utilize Facebook for visual media, with the commenting function proving particularly useful for understanding oppositional stances (through trolls, for example) and gauging the public imagination in general. Facebook is a space to engage specific audiences with evidence and determine impacts.10 Its free data function also makes it more accessible.

Nevertheless, both CTT and NU acknowledge the danger of trolling undermining the efficacy of Facebook sites. CTT CAN members found it essential to use the organization's "ways of working" mandate to publicize the parameters of communication, with a dedicated team of moderators keeping an eye out. Both organizations stressed the importance of storytelling within boundaries determined by moderators.11

Both CAN and NU activists reported WhatsApp as the most useful and effective platform. As a bounded system of groups and broadcasts, with sharing and editing functions, it offers sufficient guarantees of privacy alongside a growing capacity to expand networks. NU uses WhatsApp for sharing information and press briefings in pre-selected journalist groups. The interoperability and internal architecture of the platform allow for social connection in a controlled fashion. The editing functions allow for personalized messaging. According to Brady, the CTT template for sharing and the associated values captured in the "ways of working" mandate helped build trust on WhatsApp as well. Interestingly, NU activists summarized Twitter feeds on WhatsApp for its organizers, as well as briefing them on daily court proceedings during the Tafelberg hearing.

<sup>7</sup> https://stopthesale.net/occupation/

<sup>8</sup>Personal communication, Kyla Hazell, Popular Education Officer, Ndifuna Ukwazi.

<sup>9</sup>Personal communication, Kyla Hazell, Popular Education Officer, Ndifuna Ukwazi.

<sup>10</sup>Personal communication, Kyla Hazell, Popular Education Officer, Ndifuna Ukwazi.

<sup>11</sup>Personal communication, Dr. Leanne Brady, Cape Town Together.

Moving from the personal and the emotional to the immediately spatial, physical realm, and eventually to engagement with the broader policy realm, is a field that deserves more attention. The blurring of boundaries between the subjective and the objective, the experiential and the factual, the cultural and the policy environment provides for experiential engagement. It also has a strategic impact on framing and alliance building. Unravelling these assemblages of tech, community action, and physical expression, within their place-based contexts, provides a useful reminder of the contingency of technology innovation.

Nevertheless, my discussion of these two case examples carries implications for three facets of city governance. As a challenge to city administration, NU activists have revealed the disjuncture between policy discourses and implementation, whilst using storytelling and online tools to make the implications of public plans and policies clear to the general public. CAN members have provided livelihood alternatives to ineffective government initiatives that were intended to protect neighborhoods from Covid-19 but unfortunately exposed them to extreme food insecurity. As inputs into, and engagements with, urban infrastructure, digital tools, data, and social media offer methods of communication and mobilization. These are not only complementary to the usual material means of negotiating the city; activists also use them as representational tools to highlight inequalities and unevenness with regard to access to public utilities. The socio-technical assemblages uncovered in these examples are indicative of associational infrastructures that include many identity constructs and practices.

Examining these two cases, I have uncovered evidence that the actions of the two organisations have had substantive impacts. There have been shifts in public policy and discourse on housing, in the NU case, and in public discourse on the impacts of the pandemic, in the CAN case. Studying the CAN example, one understands how activists can hold a space for many place-based interpretations of what is needed and where action is required. I would also argue that they create spaces for citizenship in the everyday. The notion of the "cyborg" is valuable in its qualities of hybridity, fluidity, temporal liquidity, and discerning technological appropriations.

Important themes related to the emphasis on agency are worthy of exploration in future research. The first is the building of narratives, intended both to shift discourse and to inform. The second, related point is the experiential dimension that finds its way into the narrative. The third is the hybrid nature of such collectives, spanning "traditional" and new media. By discussing the two Cape Town examples above, I have striven to provide empirical texture to these claims.

#### **References**

Allagui, I., & Kuebler, J. (2011). The Arab Spring and the role of ICTs: Editorial introduction. *International Journal of Communication, 5,* 1435–1442.

Anderson, W. (2002). Introduction: Postcolonial technoscience. *Social Studies of Science, 32,* 643–658. https://doi.org/10.1177/030631270203200502


**Nancy Odendaal** is a Professor of City and Regional Planning at the University of Cape Town in South Africa. Her research and teaching interests are concerned with three overlapping areas: spatial planning, socio-technical change in cities of the global South, and smart urbanism. Her book entitled 'Disrupted Urbanism: Situated Smart Initiatives in African Cities', published by Bristol University Press, was released in January 2023, and provides a counter to corporate smart city discourses through empirical work in a number of African cities.


## **Chapter 9 Data-Based Frictions in Civic Action: Trust, Technology, and Participation**

**Alison B. Powell**

The contemporary urban experience is mediated by a range of technologies, such as "smart" devices measuring traffic levels, air quality, or footfall. "Smartness" as a mode of urban design and governance refers to processes through which technologies are embedded and become ubiquitous in cities. "Smartness" tends to keep pace with technological change, with "smart cities" embedding internet technology, data-driven technology, and sensor systems as these have become available over time (Powell, 2021). Roche (2017) outlines that enhanced socio-spatial literacy, based in practices such as using metrics, judging location, and considering scale, might be both the result and a requirement of a smart city, and suggests that these practices parallel the operators available in Geographic Information Systems (GIS). This implies that citizen skills and practices should reflect or draw upon the logics and framings of smart city management technologies.

These general trends of smartness and optimization also impact processes of civic engagement: Assumptions that citizens should engage with data, whether spatially represented or otherwise, underpin contemporary processes for civic participation (Marres, 2015a; Powell, 2021), framed in terms of the local government's capacity to fulfil a duty to the citizenry of improving the efficiency of services (Juvenile Ehwi, Holmes, Maslova, & Burgess, 2022). However, as Juvenile Ehwi et al. (2022) identify, a number of ethical issues emerge from the reformulation of complex issues into computable processes. "Smart cities can have a stupefying effect if decisions are geared towards efficiency at the expense of expanding knowledge and understandings of experiences of the city" (Sennett 2018, cited in Juvenile Ehwi et al., 2022). This is particularly significant for policy issues that are complex and have broad, long-term impacts, such as responses to climate change.

This chapter examines civic engagement with policy efforts at optimizing for sustainability, looking at how "smart city" policymaking processes can generate antagonistic responses that illustrate a lack of trust in data, and an associated lack of trust in elected officials and the democratic process in general. It examines oppositional citizen responses to policies aimed at lowering vehicle traffic and air pollution by creating "Low Traffic Neighbourhoods" (LTNs) in inner London, UK, investigating how these responses leverage narratives of systemic inequality, distrust, and lack of accountability in the face of "smart" governance strategies. By examining discussions taking place in a Facebook group composed of residents concerned about LTN policies, the chapter reveals the slow development of antagonistic and disengaged narratives in this discussion space, suggesting that smart governance strategies may have severe shortcomings in terms of public values or inclusive planning.

#### **Literature Review**

#### *"Smartness"*

Smartness is both a technological mandate and a governance frame. "Smart" technologies are positioned as tools for more effective control and management of complex urban environments (considered "top-down" smart urbanism) and as effective means for educating or empowering citizens to participate in urban life ("bottom-up" smart urbanism). Top-down smart urbanism focuses on the city as a system (Batty, 2013) and often involves shifting urban planning and decision-making towards the embedding of technologies in order to facilitate this: Examples include prescriptive analytics for public transport (Wu & Yang, 2017) and data-based monitoring of traffic, air quality, noise, or congestion, often aggregated on urban dashboards (Kitchin, 2016). The entwining of technology and governance means that decision-making power in smart cities can be shaped by technology companies rather than municipal governments (Castelnovo, 2019; Ruhlandt, 2018). As well, the shift towards "platform-based" urban governance, which focuses on collaboration between governments, universities, and companies, can reposition the role of local government towards that of a "broker" or intermediary (Deakin, 2014). By contrast, "bottom-up" smart urbanism focuses on the ways that ubiquitous technology might enhance the capacity of citizens to participate in urban governance, through structures of participation enabled by platform governance as well as the affordances of digital technology.

Halpern and Mitchell (2022) suggest that smartness is primarily an epistemology rather than a technology. They view smartness, instantiated through a range of emerging technologies, as a mode of life. This mode of life is grounded in data-driven logics and aimed at "optimizing" certain functions and processes. Optimization, the management or improvement of systemic outcomes within defined boundaries, is a consequence and key component of smartness (Halpern, Mitchell, & Geoghegan, 2017). Donolo and Donolo (2013) argue that governance of "smart cities" requires expanded civic knowledge and greater accessibility of urban data. In many ways, citizens are not only invited but expected to participate in urban governance by interrogating government data, collecting their own data or "providing personal subjective observations, in analysing aggregated anonymized data from their collective networks … and applying expertise from their personal local experiences" (Roche, 2017, p. 662). The expectations of civic participation and engagement with data, and the concomitant development of smart city governance frameworks that rely on data at the expense of expertise, might intensify inequality.

The promise of smartness has been widely critiqued on the grounds that the technical equipment of smart cities creates ideal conditions for intensive surveillance, both through top-down processes of sensing and monitoring and through bottom-up practices of self-quantification, including the use of individualised route planning and recommendation systems. One important critique of smartness mandates targets the logic of optimization itself, which draws on computational logics to promise improvements in functionality for data-based systems.

#### *Optimization and Its Impacts on Governance and Democratic Process*

The current model of smart city development hinges on a logic of optimization. This logic can be placed in service of different ends—efficient movement of motor vehicles, perhaps, or reduced consumption of fossil fuels within publicly owned buildings. Many smart city propositions are therefore framed as potential ways to achieve aims associated with sustainability. Sustainability itself thus becomes the object of an optimizing process, measured against success metrics and becoming an object of investment. Critiques of optimization identify how focusing on a narrow range of data-based indicators may exclude other forms of knowledge and may intensify power dynamics that alienate citizens.

Optimization, aiming to improve certain measurable aspects given specified constraints, necessarily presumes the capacity to define those aspects and the means of measuring them, including the definition of constraint. McKelvey and Neves (2021, p. 97) identify optimization as a "form of calculative decision-making embedded in legitimating institutions and media that seek[s] to actualize optimal social and technical practices in real time." They trace the extent to which optimization, from its original mathematical definition as the best solution among multiple options, has expanded to operate as a mechanism of legitimation for governance decisions. As this has occurred, optimization has become a socio-technical practice that defines relationships, foregrounds certain knowledge and practice at the expense of others, and defines power relationships. Halpern and her co-authors argue that optimization works from relative, rather than normative, principles, making it difficult to specify the ultimate normative aims of an optimization process. They write, "to optimize is to find the best relationship between minima and maxima performances of a system. Optimization is not a normative or absolute measure of performance but an internally referential and relative one" (Halpern et al., 2017, p. 119). Optimizing is therefore always tuned towards a relative improvement of a measurable state. Because achieving this means both measuring and defining out elements not concerned with this measurability, optimization cannot ever be complete. As McKelvey and Neves (2021, p. 102) put it, "the ends of optimization are without end." Logics of optimization can shape what kind of citizen participation is invited or legitimated (Powell, 2021), or what kind of creativity is valorised (Morris, Prey, & Nieborg, 2021). Politically speaking, optimization invites the performance of a calculative mindset which considers what information can be put to use to determine "what is 'best,' 'favourable,' or even 'better'—it not only *describes a process* (for rendering optimal) but also entails a claim (about that which is optimal, or best) … optimization necessarily articulates social, political, or other commitments as well as their ideal or maximal expression" (Stevens, Hoffmann, & Florini, 2021, p. 115). As a deep structuring logic lying beneath technological equipment as well as governance procedures, optimization operationalizes smartness, prioritizing efficiency and predictable outcomes.
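
To make the mathematical sense invoked here concrete, the textbook form of a constrained optimization problem (a generic formulation, not one given by the authors cited above) seeks the best value of an objective function within defined constraints:

$$\min_{x \in X} f(x) \quad \text{subject to} \quad g_i(x) \le 0, \; i = 1, \dots, m$$

Whatever is not encoded in the objective $f$ or the constraints $g_i$ is, by construction, invisible to the procedure; this is the formal root of the critique that optimization "defines out" concerns that resist measurement.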

Governance processes within smart city logics also embed logics of optimization, seeking to streamline urban service delivery as well as civic participation by creating space for "co-creation" using smart city resources (Bolz, 2018). Co-creation also implies expanded roles for technology companies, other businesses, and academic institutions, which may have different understandings of the significance of participation. Critiques of these strategies identify that co-creation may, from a citizen perspective, be tokenistic and technology-driven (Wolff, Gooch, Cavero, Rashid, & Kortuem, 2019). Furthermore, these processes fundamentally operate on principles of optimization, seeking to make citizen participation legible, streamlined, and predictable from the perspective of the government as well as its partners. As Marres (2015b) argues, these modes of governance compel participation by directing it towards pre-defined ends or into times, places, and communication modes that align with powerful frames.

These processes also embed aspects of what Boltanski and Chiapello (2005) describe as the "project-based" orientation towards social life, which is focused on and directed towards definable projects. A project-based logic at work in the sphere of governance, for example, drives investments in collaboration and partnerships between cities, businesses, and universities (Deakin, 2014), as well as the mobilization of citizens in decision-making (Cardullo & Kitchin, 2019). This concept of governance depends upon partner networks (Pierre, 1999). These project-based or partner-led models change the enactment of working relationships and decision-making protocols (Kourtit et al., 2014). Juvenile Ehwi et al. (2022) identify that these changes in governance relationships raise important questions about how civic engagement is performed within smart governance contexts. They note that smart governance strategies for engagement, including strategies such as "hackathons" that depend on citizen engagement with data, appear on the surface to foster inclusivity in creating solutions to urban problems but often fail to do so. These failures stem from the sense that these efforts are sometimes "imbued with predetermined outcomes which run counter to established democratic principles of urban governance" (Obeng-Odoom, 2017, cited in Juvenile Ehwi et al., 2022).

The practice of democratic, participatory urban governance is often schematized as a ladder (Arnstein, 1969) or a spectrum (International Association for Public Participation, 2018) of participation or decision authority. In these schemes, increasing capacity for shared decision authority or meaningful participation ranges from the public being informed of decisions to the public being capable of collaboration or empowerment (Nabatchi, 2012). Schematizing participation can also be aligned with attempts at optimizing participation by tying it to pre-determined goals and outcomes. The prioritization of systematic rather than holistic knowledge creates an environment that privileges forms of participation aligned with the forms of knowledge already prioritized within the smart governance environment. These include digital data but also structured forms of evidence that align with perceptions of the city as a system. While keywords related to democratic governance such as "trust" and "accountability" are leveraged within smart governance processes, they are often abstracted in ways that remove experiences of territory or feelings of conflict, creating structuring effects that intensify and polarize conflicts and differences. This creates some of the conditions for populist, even antagonistic, responses to smart governance projects.

This chapter examines citizen responses to low-traffic neighbourhoods (LTNs), policy interventions seeking to reduce vehicle traffic on residential, narrow, or non-major urban roads. At issue in this essay is not the policy outcome of LTNs, which is to reduce vehicle traffic and air pollution by creating barriers to entry for motorized vehicles. Rather, it is the way that a dynamic of data-based optimization frames and shapes opportunities for citizenship, and the way that this shaping intensifies dynamics of antagonism and mistrust that undermine efforts to use participation and consultation to ensure smart governance is trustworthy and legitimate.

#### **Low Traffic Neighbourhoods: Optimizing or Alienating?**

Low-traffic neighbourhoods restrict through-traffic on some roads using barriers that permit access by pedestrians, bicycles, and other non-motorized vehicles, as well as measures that reallocate road space away from motor vehicles, such as expanded pavements with seating and bicycle racks, boulevards for cycling, and removal of parking. Low-traffic neighbourhoods are considered in urban planning as one of the lowest-cost measures to address pollution, air quality, climate change, road congestion, and low levels of physical fitness among urban residents.

The chapter situates the introduction of LTNs in the context of the smartness mandate and efforts to optimize participation, reflecting on the extent to which these processes attempt to present value neutrality on the part of government decision-makers (see Davidoff, 1965). It then presents results of a thematic analysis of online comments in a Facebook group composed of citizens concerned about the introduction of LTNs in one London borough.

London, through decision-making by the citywide transport authority Transport for London and local borough governments, instituted 101 low-traffic neighbourhood schemes during 2020 and 2021. These were introduced as experimental pilots during the first coronavirus restrictions, with public consultations beginning in 2021. The broader political-economic background to these schemes involves not only the increasing levels of vehicle traffic on London's roads, the Greater London Authority's commitment to Net Zero, and broad public support for reductions in traffic, but also a decade of funding cuts to local governments and a number of policies restricting their capacity to raise funds themselves, leading to a dependence on the central state as well as to the establishment of alternative ways of generating revenue to support public services—including parking and traffic fines.

The introduction in 2020 and 2021 of Low Traffic Neighbourhoods is an example of prescriptive smart governance—it attempts to nudge or strongly encourage shifts in individual and collective behaviour. It is data-based in policy terms, since the zones deemed suitable for LTNs are defined on the basis of air pollution readings and the density of particular types of roads, and it is enforced through "smart regulation": automatic licence plate cameras that automatically deliver fixed penalty notices to drivers who enter the zones—a feature that is more inclusive than physical roadblocks but that is also viewed as a mechanism for local governments to generate revenue from these schemes. The schemes are also embedded within data-driven, spatially oriented frameworks for participation: decisions about which roads to close have, in some London boroughs, been undertaken through participatory online mapping exercises conducted with cycling and active transport organisations and extended to the public in the early phases. In all schemes, maps and published data (including air quality data, numbers of vehicles on major roads, and statistics on the approval of various design options) are distributed, and participation from citizens is encouraged to occur online, through surveys, map annotations, and online meetings.
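
The enforcement rule itself is simple. As a purely illustrative sketch—the camera identifiers, plate formats, and exemption register below are hypothetical assumptions, not details of the London schemes—the automated decision applied to every detected vehicle reduces to a membership check:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical register of exempt plates (e.g., emergency services or permit
# holders); not drawn from any actual scheme.
EXEMPT_PLATES = {"EMERG1", "PERMIT42"}

@dataclass
class PlateRead:
    plate: str       # registration read by the ANPR camera
    camera_id: str   # camera at the LTN zone boundary
    seen_at: datetime

def should_issue_fpn(read: PlateRead) -> bool:
    """Return True if a fixed penalty notice (FPN) should be issued automatically."""
    return read.plate not in EXEMPT_PLATES

reads = [
    PlateRead("AB12CDE", "ltn-cam-03", datetime(2021, 9, 1, 8, 15)),
    PlateRead("EMERG1", "ltn-cam-03", datetime(2021, 9, 1, 8, 16)),
]
for r in reads:
    print(r.plate, "-> FPN issued" if should_issue_fpn(r) else "-> exempt")
```

An exemption check of this kind is what can make camera enforcement "more inclusive" than a physical planter, which blocks every vehicle alike; at the same time, the rule admits no context or appeal at the point of detection, which is part of what makes this mode of governance prescriptive.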

Despite remarkably broad agreement across the UK that climate change is a serious issue (a recent poll suggests 80% of voters are concerned about climate change), and despite activist and media attention to the poor quality of the city's air, opposition to LTN schemes has been substantial, leading two London boroughs to abandon their proposed plans. Of course, any urban planning scheme inevitably attracts dissenting voices: the question here relates to how these dissenting voices engage with three key aspects of smart governance: the use of data, the generation of trust and accountability, and the overall legitimacy of the planning decisions. Of interest are the qualities of dissent in this case, and in particular the ways that smart governance displaces particular forms of knowledge and hence creates the conditions for a divisive politics driven by, and intensifying, difference and inequality.

#### **Methods**

The findings discussed here are based on a thematic analysis of Facebook postings made between June 2021 and June 2022. The thematic analysis identified three key themes relevant to the processes of smart governance: a critique of data-driven decision-making, a sense of these policies as socially divisive, and a critique of the legitimacy of local government. Over time, these themes led to a shift in the group's discourse towards expressions of populist dissent and, in the run-up to local elections, emerging advocacy for right-wing political parties.

The posts discussed here were posted in a publicly accessible Facebook group between June 2021 and April 2022. The group has 2,500 members and is described as "a diverse group of [borough] residents adversely affected and deeply concerned by the impact of LTN schemes." As the group is accessible only to people who express interest in LTNs, it is not representative of a range of views. In presenting data here I have tried to represent the range of concerns while protecting the identities of the contributors, who are posting online in what they may perceive as a private space. This is especially important because the group is a space of shared feeling which, as it circulates in a quasi-anonymous online space, raises feelings of displacement, mistrust, and alienation. The thematic areas discussed here appeared frequently within group discussions. In line with responsible research ethics, I have not included any direct quotations from group members but have instead provided paraphrases of comments that I collected and analysed. Paraphrasing tries to reflect as much as possible the style and tone of original postings while removing any identifying information that would permit the re-identification of anyone participating in the group. Geographical information is also removed.

The analysis of the discussions in the group follows the broad tradition of discourse analysis, with a focus on interpreting how "the concrete, situated actions people perform with particular mediational means (such as written texts, computers, mobile phones) … enact membership in particular social groups" (Jones, Chik, & Hafner, 2015, p. 2). Discourse analysis focuses on text, contexts, interactions, and power. As such, the themes identified and discussed here connect with one another and illustrate how the anti-LTN discussion moved from critiques of smart governance strategies, including reliance on and use of data to communicate how policy decisions are made as well as the use of consultation as a validation exercise, towards more evocative, affective, and antagonistic statements about alienation, government greed, and the lack of legitimacy of the LTN schemes. The thematic analysis is set within a framework examining not only what is written and how shared meanings are generated through comment and interaction, but also the social order that this creates and the power dynamics it represents (Chouliaraki & Fairclough, 1999).

The anti-LTN Facebook group provides space for frustration and dissent, while also building up, over time, a discourse and social context that de-legitimizes both the practice of smart governance and the notion of participatory (or even socially legitimate) planning. This poses challenges for the local government, which is led by the Labour party, traditionally associated—especially in London—with radical, inclusive, socially just planning, as well as with the maintenance of democracy. Although the group was not tightly organized, consistent messages, especially those posted by a small number of regular writers, reinforced a sense of alienation and a weakening of the legitimacy of the local government. In particular, a few of these contributors strongly framed connections between the data used to justify the policy decision, the sense of marginalization expressed by others, and the political ideology of the Conservative party, which had traditionally not had much electoral success in the local area and which had been leveraging a newly populist identity in the local context. This identity included Conservative party electoral material explicitly suggesting that LTNs encroached on individual freedom and that voting Conservative would secure freedom from government control. This echoed posts from one of the core contributors to the Facebook group that positioned LTN policies as exacerbating a sense of alienation and inequality.

The group also discussed other forms of collective action, including the crowdfunding of a legal challenge to the LTNs on the grounds of a failure to comply with equalities legislation, and the printing and distribution of large signs opposing the schemes. Group members described donating money to the legal appeal and purchasing signs and placards for themselves as well as for "donation" to other group members living on main roads or in areas with high visibility. One frequent contributor (the same one who made political statements) photographed one of their relatives installing the road signs in different locations around the neighbourhood. The group also shared and commented on news—local, regional, and national—with relevance to LTNs or to local politics. Many news articles shared in the group came from the Taxi News Network, a taxi drivers' lobbying organization.

#### **Findings**

The three main themes reiterated over the discussion are: a critique of data-based smart governance, a claim that LTNs exacerbate inequality, and a broader questioning of the local government's legitimacy. These unfold in relation to the text, contexts, interactions, and power that make them influential for a discussion of smart governance. Specifically, the broader framings of power create a space for populist political discussion.

#### *Critiques of Data-Based Smart Governance*

From the perspective of the smart governance context, the anti-LTN discussions respond both to policy-making based on principles of data-based optimization and to the conventional considerations of consultation and how consultation data is employed within smart governance. Some regular contributors to the group, especially during the early phases of observation, commented on the use of particular forms of data to legitimate the creation of LTNs: this included air quality measurements as well as appeals to COVID legislation requiring increased space on roads. Texts on the sources of data quickly began to include critiques of the intentions of the planners or the exclusion of citizen voice, and the interactions between people posting and commenting moved towards speculation on the motives of the local government. This thread illustrates how the texts and interactions move from data, through concerns about legitimacy, and towards evocations of alienation and inequality. This paraphrased conversation thread is illustrative of the role of the group's interaction in positioning data and smart governance:


Participants also critiqued the use of participatory mapping as a consultation strategy, suggesting that the use of these participatory tools was performative rather than consultative:


Consultation is notoriously difficult. However, the tension between the perceived necessity of participation to validate policy decisions and the generation of data for analysis is readily apparent to the LTN group participants. Through their comments on the map, they suggested that the local government's data were unrepresentative and that comments or opposition were being ignored. The mapping platform being used required a two-step online registration. Commenters claimed that these maps did not meaningfully involve people and did not represent dissent (or, if they did, that dissent was dismissed). Group members responded by collecting their own data—largely in the form of photos or videos of gridlock where there had not been any previously. These videos and photos were usually accompanied by comments like the one paraphrased above, discussing the speed of car trips taken in the past and how much longer they were taking now. Some videos taken from upper-story windows appeared to show long lines of cars near a primary school.

Another set of posts reported on a volunteer effort to "staff" a newly introduced automatic number plate camera, in order to engage the public in critiques of LTNs as well as to help drivers avoid fines. Through a thread on the group, eight volunteers, led by the politically outspoken commentator, were organized to spend 2 h each standing under a camera at the edge of an LTN zone. The volunteers approached each motorist coming towards the zone and explained that there was a camera installed there that would trigger a fine. The volunteers logged each interaction and reported all of the conversations back to the Facebook group. Most of the interactions were reported as being short and resulting in the cars turning around (often with thanks for helping the drivers avoid a fine), while some were reported as longer conversations about the impact drivers felt from the LTN, resulting in some drivers joining the Facebook group. This intervention demonstrated that the group held the capacity to empower participation (Arnstein, 1969; Nabatchi, 2012) in opposition to, rather than in support of, smart governance policies.

#### *Alienation and Inequality*

Opposition to Low Traffic Neighbourhoods leverages concerns about a range of inequalities. In September 2021 one of the members of the LTN group undertook legal proceedings against the local government, arguing that the rollout of LTNs using emergency COVID legislation violated their rights as a disabled person. While the judicial review found no specific violations of the rights applying to "protected categories" of persons (which include disability), the judge's comments suggest that assessments of the impacts of LTNs have not necessarily been able to fully include issues of inequality—including not only "protected categories" but other bases for discrimination.

In Summer 2021 the Facebook group discussed this case in detail, and in the period following, many posts focused on themes of inequality and discrimination, especially a perceived discrimination against poorer people who (it was argued) were more likely to live on main roads and "boundary roads" at the edges of LTNs and therefore not gain the benefit of reduced traffic. While this claim is not supported by demographic, traffic, or air quality data, the sense of having been overlooked, discriminated against, and placed on the losing end of urban improvement policies was a consistent theme, expressed well in the hashtag #londonisruined used within the group. This sense of the city having been "ruined" by changes to the way vehicle traffic circulates was connected with critiques of class-based inequalities, suggesting that reductions in vehicle through-traffic on residential roads were part of an effort to force ethnic minorities and poor people out of inner-city neighbourhoods. This paraphrased excerpt illustrates this theme:

I completed a consultation saying that there was a lack of consultation for disabled, carers and traders. These schemes only benefit those without a heavily timetabled work life if they have one at all, who wants silence with their morning coffee.

Contributors to the group also shared a documentary film trailer produced by a filmmaker from another area, whose themes focus particularly on inequality and perceived community division as a result of the LTN schemes. Shots in the film trailer linger on the physical infrastructure of the scheme, including planters and bollards, with voiceovers saying "they have created a border: there is us over here, and them over there" and "the council is trying to create a division between what they call the 'million-pound house people' on one side and the council residents on the other." The language and visual imagery of the film were celebrated and discussed in terms of the financial benefit of LTN schemes to the local government.

Other posts claimed (in contrast to officially collected data) that traffic reductions only benefit residents of side streets and displace pollution onto main roads, and one reported that real estate listings had begun to include the phrase "inside one of London's exclusive low traffic neighbourhoods" to advertise expensive property. These claims connect with a deeply held frustration about who "sustainable, smart" cities are meant to benefit.

This theme also illustrated the limitations that participants encountered as they attempted to use the formal mechanisms of consultation and legal challenge to foreground their knowledge. In this oppositional, antagonistic mode of governance, the knowledge and experience of people must be positioned in relation to the legal frames and regulatory opportunities provided, in contexts where participation is constructed more narrowly. The legal challenge proceeded through the courts during 2021 and 2022, finally to be rejected by the Supreme Court.

#### *Erosion of Trust and Entry into Open Political Space*

A third cross-cutting theme builds from the previous two, assembling what appears to be a logical connection between dismissive consultation, pervasive inequality, and widespread corruption within local government, opening a space where populist perspectives can be perceived as legitimate. By presenting comments on these three themes in succession, members of the group collectively suggest a causal relationship between the themes. This is reinforced by the way that group members can add reactions to posts, validating the feelings or sentiment behind them. The most emotive and heavily commented threads within the group focused on elected representatives, including London mayor Sadiq Khan and one of the local councillors. People making posts used creative as well as dismissive language, manipulating the name of the local area using variations of "scam/scum", and modifying the name of the local councillor to include the word "scary". This language play creates the sense of a trusted "insider" culture within the group, operating against encroaching "outsiders" who might change the way their neighbourhoods function. Sometimes this insider/outsider dynamic specifically referred to the LTN projects as "gentrification", contextualizing these projects as forms or aspects of inequality. Another example is this comment:

This could be a life or death issue, so why? So as the so called representative can impose their will on the rest of us! I mean the cycle lobby who believes only themselves are concerned about air quality, using false criteria while relying on delivery services using motorized transport and air travel for their holidays!!!!!

The theme of "life and death" reoccurred frequently, as commentators suggested that the creation and maintenance of LTN schemes were displacing traffc in ways that would "send us to an early grave" as one commentator wrote. This emotive discourse leveraged the idea of survival and inequality as well as the separation between "us" local residents and "them"—an imagined urban elite comprised of bicycle-riding local government members or "young professionals"—wealthy, incoming and disconnected from the existing community, who frequently mention disability, poverty, and long relationships with the local area in their comments.

Contributors to the group were hyper-vigilant about the behaviour of elected officials and attentive to any potential hypocrisy. When the London mayor apparently drove through a different LTN, furious comments suggested that he could not possibly have legitimately won his most recent election. Commentators also consistently suggested that local government officials were corrupt, at one point publishing a diagram with lines drawn between the elected officials and cycling advocacy organizations. In November 2021 one of the group members posted a poll asking how members would vote in the next election—with most people, unsurprisingly, reporting that they would not vote for the incumbent centre-left party. The traffic restrictions, combined with frustration about restrictions on everyday life as a result of COVID-19, provoked a politicization of group members. This paraphrased post indicates the strength of feeling:

These lies about roads, covid and pollution are false and push an agenda that a few use to better their lives. While the rest suffer. Never would I have complained about road issues until these LTNs came in. This says it all.

Together, the expressions of alienation and the affective and interpersonal quality of the conversation begin to frame the planning process as inevitable, exclusionary, and arbitrary (that is, from the perspective of commentators). This creates space for an affective response to the LTN policies, which began to be channelled through sharing political material from the Conservative party. In this area of London, Conservative politicians had never previously been elected, since the electorate, composed of a large number of people in relative poverty or in what was considered the English "working class", did not find ideological common cause with Conservatives. In the anti-LTN group, however, participants argued that the Conservatives would be better equipped to address the area's systemic inequalities.

#### **Discussion**

Practices of democratic governance like those in place in cities of the Global North depend on participation from citizens. This participation has been infrastructured (see Marres, 2015a) through a variety of modes: data-extraction in the service of optimizing urban processes, as discussed above, as well as involvement in consultation processes. Increasingly, such consultation processes are also digitally mediated and digitally structured. Such processes of consultation direct participation towards particular ends—not only the generation of data but the validation of optimizing processes begun through technocratic effort. As DiSalvo (2022) explores in their discussions of involving publics in the development of community services, it is possible to create strategies for participation that capture its affective aspects: the feeling of belonging.

For proponents of "smart city" processes involving data-based policy decisions and data-driven modes of consultation, citizen involvement validates and supports these policy decisions, becoming a social infrastructure that also sustains the policy infrastructure, sustaining its potential claims to democratic or public relevance of decisions. In the case of the anti-LTN group, the processes of consultation appear as a fait accompli, with civic action positions either as validating data-driven decisions or, if this fails, employing formal and oppositional mechanisms.

#### *Knowledge Asymmetries*

LTN opponents question the foundations of data and the relationship between abstract spatial planning and lived experience of territory, which includes habits such as driving, as well as driving as a response to disability or work. These habits are associated and aligned with an experience of the particular places in which people work and live, and with the ways that they understand and express their political positions.

Smart governance prioritizes efficiency, yet all governance strategies depend on trust and accountability. The trajectory of discussion in the anti-LTN Facebook group suggests that when trust and accountability are reduced to the publication of data, and consultation to the performance of requests for comment, a discursive space opens that holds the potential for appropriation by new political forces.

This chapter has discussed how shifts in the exercise of democratic participation intersect with asymmetries in information between different actors, including local governments but also groups of citizens. It suggests that asymmetries in information, and different standards for data and evidence production between powerful and less powerful actors, play into dynamics that intensify *antagonistic* rather than *agonistic* frictions surrounding data, weakening the legitimacy of smart governance strategies and opening up space for populist positions. In turn, these antagonistic frictions reinforce the use of prescriptive approaches, including the expansion of the use of "trace" data where consent is not possible. This suggests a need to reposition "smart governance" development in ways that might mitigate these asymmetries and introduce the potential for a broader range of knowledge to become part of governance discussions. This might be particularly relevant for governance structures seeking to create deep involvement in decision-making, beyond the merely consultative. As some work on participatory data governance has illustrated (Micheli, Ponti, Craglia, & Berti Suman, 2020), this can be possible in a data-driven context. This could include foregrounding opportunities for citizens to define which data are most significant for their knowledge of the city, opportunities for data to be gathered in commons and placed in conversation with data collected in other ways, and renewed attention to the necessary conflicts that also underpin representational democracy.

#### **Conclusion**

Embedding data-based technology into prescriptive policy processes reinforces inequalities and unequal dynamics of power, by limiting reciprocity and therefore intensifying strong feelings—like alienation—that cannot be expressed. Without space for strong feelings to become part of a socially validated process, these harden into antagonism and animosity. In the case of the LTN online discussion group, strong feelings motivated citizens to tell stories about their own observations, rendering these more legitimate, in their view, than officially collected data. Since reciprocity was considered neither in the data-driven policy-making process nor in any other part of the LTN process, opportunities for agonistic disagreement hardened into distrust. This chapter provides one example of the risks to democratic practice that might proceed from a narrow focus on data-driven, prescriptive planning alongside a failure to provide opportunities for reciprocity. In addition, other aspects of holistic technology development may need to be combined with opportunities for reciprocity—such as the capacity to reverse decisions, the capacity to consider the interests with which technological decisions are made, and the temporalities of these decisions. The current and accelerating climate and public health emergency requires new organizational approaches and a significant amount of social change. Potential for social change should be centred around the capacity to tolerate friction—to acknowledge and accommodate feeling rather than seeking to optimize at all costs. It should also value a wide range of forms of knowledge, practice, and experience while also seeking to communicate information that cannot be intuited, in order to reduce the creation of new domains of ignorance. Such reciprocity is required in order to capture the enthusiasm and vibrancy of politics.

#### **References**


**Alison B. Powell** is Associate Professor in Media and Communications at the London School of Economics. She pursues her research interest in citizenship, participation, and collaborative practice in multiple ways: from empirical work on urban governance to leadership of projects such as JUST AI: Joining Up Society and Technology for AI, supported by the AHRC and the Ada Lovelace Institute, and Understanding Automated Decisions, supported by the Open Society Foundations. Her book *Undoing Optimization: Civic Action and Smart Cities* is published by Yale University Press. The book identifies how citizens engage with the promise of smart cities, and suggests that integrated and systems-based thinking is required to enhance the ethical potential of civic action using technology.


## **Chapter 10 Relational Spaces of Digital Labor**

**Ryan Burns**
Department of Geography, University of Calgary, Calgary, AB, Canada
e-mail: ryan.burns1@ucalgary.ca

#### **Digital Labor and Its Limits**<sup>1</sup>

The recent "digital turn" within disciplinary geography has attended to the sociopolitical and economic foundations and implications of algorithms, big data, smart cities, gaming, the quantifed self, predictive policing, and other digital technologies mediating everyday life. Within this arena, a robust research agenda has investigated the growing digitalization of labor (see, e.g., Scholz, 2013). The distinction between everyday life and work is gradually diminishing, as productive capacities are increasingly hard-coded into quotidian activities bearing little resemblance to colloquial understandings of "work". By extension, the term *digital labor* can be conceived broadly, as encompassing work mediated by digital technologies like mobile phones and crowdsourcing or microtasking platforms (Aytes, 2012; Ettlinger, 2016); temporary, precarious, contract-based gig work (Woodcock & Graham, 2020); posting content on social media platforms (Fuchs & Sevignani, 2013; Mosco, 2017); work in the technology sector (Cockayne, 2016); online content moderation (Roberts, 2019); and many other applications (Jarrett, 2020, 2022). The form of digital labor called gig work, where workers are assigned small tasks, usually as an independent contractor, exemplifes the scale of digital labor: depending on the precise defnition, some have estimated that between 2018 and 2023 the global number of gig workers will have increased from 43 million to 78 million, and that 16% of Americans have conducted gig work (Velocity Global, 2022). Increasingly, users of digital technologies are the source of productive and extractive value as institutions surreptitiously generate value from individuals and groups through smartphone

R. Burns (\*)

© The Author(s) 2024 185

<sup>1</sup>Much of this paper is an adaptation of a working paper of mine (Burns, 2020).

Department of Geography, University of Calgary, Calgary, AB, Canada e-mail: ryan.burns1@ucalgary.ca

J. Glückler, R. Panitz (eds.), *Knowledge and Digital Technology*, Knowledge and Space 19, https://doi.org/10.1007/978-3-031-39101-9\_10

applications and surveillance technologies, even without their explicit awareness of it (Couldry & Mejias, 2019; Mouton & Burns, 2021; Thatcher, O'Sullivan, & Mahmoudi, 2016; Zuboff, 2019). Everyday activities like posting on social media or reacting to others' posts, using a transit card, or flling out a CAPTCHA are now "datafed"—coded into data and stored in databases—in order to produce value from people's and communities' interactions, movements, knowledges, and networks. To varying degrees, research on digital labor has spoken to each of these examples.

With some notable exceptions, though, such research has paid insufficient attention to the *spaces* of digital labor—where it occurs, where it is recruited, its spatial relations, what sort of spaces it produces, and so on. As I hope to show below, digital labor is currently transitioning to enroll more affective, immaterial, and attentional work, and, more than previous labor regimes, digital labor occurs across traditional jurisdictions anchored on state sovereignty. Together, these two transformations underscore the importance of directly contending with digital labor's spatialities. More specifically, digital labor research often exemplifies one of two limitations. The first and more common limitation is that the spaces of digital labor are not considered at all. Such accounts might instead focus on labor relations, transformations of the workplace, and shifting exchange media, but frame these processes without attention to their attendant spaces. Second, when its spatialities are indeed considered, research typically frames the "workplace" as occurring within Euclidean spaces. This abstracts individuals and (often multinational) relations to the political boundaries of, for instance, the nation-state or sub-national regions. It also relies on a conception of digital labor as an intentional intervention made with the aim of compensation, rather than an often subconscious or immaterial productive practice. Here, I build on the productive work in these areas by directly confronting the question of how we might (re)think the spaces of digital labor.

I argue not only that the spaces of digital labor are important for understanding its relations, implications, and limits, but that they are rooted and expressed in ways not easily captured in Euclidean geometries. A *relational spaces* framework helps address key shortcomings of the ways digital labor's spatialities have been conceived. Relationality can be understood as analytically prioritizing the networks and connections that produce space for particular purposes; it is to think about relations between actors rather than abstracting actors from their socio-political contexts and positionality within global systems. Despite research's important contributions to understanding digital labor, overlooking its non-Euclidean spatialities constrains the ability of research to explain key socio-political processes. For example, legal and regulatory frameworks remain centered, for the most part, on national jurisprudence despite the diffuse (in Euclidean space) nature of digital labor; some work has been done to mobilize regulatory frameworks across national boundaries, but such work leaves unquestioned the analytical unit of the nation-state itself. A relational perspective helps us focus on networks and see space as produced for labor exploitation, rather than as a container "holding" discrete acts of *work*.

As digital technologies are increasingly vehicles for intensifying value production and extraction, the questions with which I contend in this article are becoming progressively more imperative. Below, I first substantiate each of these claims about digital labor in an extensive review of the digital labor literature, showing that most work either is aspatialized or relies on a Euclidean geometrical framework. I then conceptualize non-Euclidean spatial thinking by drawing on the relational-spatial thinking that has a long history in geographic scholarship. Lastly, I bring these two together by proposing a framework for thinking the *relational* spaces of digital labor.

#### **Geographies of Digital Labor**

#### *Digital Labor as Strategy, Relation, Productive Process*

The emergence of digital labor is part of broader institutional and political-economic reforms of workforce management, labor markets, precaritization, and firm profit strategy (Arvidsson, 2019; Huws, 2014; Zukin, 2020). The increased precarity and shortened temporal scales of digital labor are perhaps best captured by the gig economy, in which workers are assigned small tasks such as delivering food with Deliveroo or SkipTheDishes, or taxiing people with Uber or Didi Chuxing (Chen, 2018; Richardson, 2020). These workers typically have the formal status of contractors rather than employees, which relieves the hiring company of paying for benefits and job security (van Doorn, 2017; Woodcock & Graham, 2020). For Pasquale (2016, p. 314), this deregulated "gig economy is a glidepath to precarity, prone to condemn laborers to insecure and poorly paid conditions". While these labor market transformations are not unique to digital contexts—Peck and Theodore (2012) locate such "contingent work strategies" in broader political-economic reforms related to and stemming from deepening neoliberalization since the 1970s—they have found a particular resonance and enabling mechanism in the milieu of the digital infrastructure of platforms.

Platforms are a key mediator for this digital labor. Srnicek (2017) compellingly links the rise of platform technologies to the profitability crisis of the 1970s that nearly led to global economic collapse in 2008. For Srnicek (2017, p. 42), platforms constitute "a powerful new type of firm" that is "capable of extracting and controlling immense amounts of data" (Srnicek, 2017, p. 6). Platforms enable new deregulated, contingent, precarious labor markets such as on-demand food delivery and ride-hailing services while often simultaneously serving as an instigator of new forms of work (Langley & Leyshon, 2017; Mahmoudi, Levenda, & Stehlin, 2021). They further provide a means of greater control over workers and alienation of workers from the products of their labor (Attoh, Wells, & Cullen, 2019; Iveson & Maalsen, 2018).

In these discussions there is some disagreement between those who view *labor* as the primary generator of value, and those who instead see value being driven by *data*. Srnicek, for instance, questions whether markets mediate the production of surplus value, whether there is a socially necessary labor time to produce value on platforms, and whether platforms are a boon or a parasite to capitalism: "Rather than exploiting free labour, the position taken here is that advertising platforms appropriate data as a raw material" (Srnicek, 2017, p. 56). Others are less direct with their position, and instead analytically focus on data to the relative exclusion of labor (e.g., Cohen, 2018), and still others question the analytical value of the term altogether (e.g., Gandini, 2021). At question is whether these processes of digital labor constitute Terranova's (2014) "free labor" insofar as it is rooted in a Marxian conceptual lineage. However, as Greene and Joseph (2015, p. 225) argue, drawing heavily on Fuchs (2010), "the labour theory of value holds, even as labour is increasingly fragmented, skilled, reskilled and deskilled. . . . [V]alorization is still realized by companies like Facebook or Twitter. . . . Marx's original conception of abstract general labour can be updated to take into account these new forms of affective labour." Elsewhere, Fuchs and Sevignani (2013) distinguish between digital work and labor in ways that attend to previous critiques, and beyond this, we should also recognize that to produce the market-exchangeable commodity of data that is central for Srnicek requires a subject engaging in productive activities; thus I contend here that labor remains a critically important category for understanding digital practices and digital capitalism.

Thinking in terms of *labor* further draws our attention to how constant capital, or the machinic solidification of the production process, currently holds the potential to eclipse variable (human) capital through intensifying automation. This potential—or *trend*, depending on the author—has been greeted with great applause by some who, like Bastani (2019) and Srnicek and Williams (2015), see new automated digital technologies as liberating the masses from work altogether. To be sure, automation has always been recognized as a core component of capitalist economies (Benanav, 2019). However, advanced development of artificial intelligence, predictive analytics, sophisticated machine learning algorithms, and decreasing costs of computational memory and processing power have increased the degree to which tasks typically delegated to humans are instead delegated to machines (Arboleda, 2020; Egbert, 2019; Eubanks, 2018). Within these broad debates, the particular discussions of robots typically fall into "the tempting yet extreme positions of either dystopian angst or positive 'boosterism'" (Bissell & Del Casino, 2017, p. 437). Robotics are often framed as directly replacing human workers, as companies like Tesla and DoorDash have actively promoted (Benanav, 2019; Robotics Online Marketing Team, 2019). However, even overlooking the historical precedent of automation, there are strong reasons to believe that robotics and automation will continue to operate alongside human laborers (Spencer, 2018).

While such research has generated critically important insights into digital labor practices, relations, and distributions, it leaves under-theorized the ways in which digital labor happens in, through, and with spaces (cf. Strauss, 2020). Indeed, extant interdisciplinary literature theorizes digital labor as both the use of digital media to create use-value (Fuchs, 2016) *and* the systems of labor that produce the media themselves (Fuchs, 2013)—but with space as a secondary consideration, when considered at all (see Scholz, 2013). This omission persists despite tacit acknowledgement that digital technologies significantly reconfigure spaces of labor and the structures that support it (Gregg, 2011; Jarrett, 2020).

#### *Geographic Engagements with Digital Labor*

Geographers and spatially minded scholars more directly confront the spatialities of digital labor, but typically leverage a Euclidean view of space, in which spaces, demarcated by geographic measures of latitude and longitude, serve as vessels for human activity. Spaces, in this conception, are simply bounded areas where things happen. This often leads to visualizations of spatial patterns using common geographic maps: for instance, country borders might be intact, map distances might be proportional to ground distances, region names are used unproblematically, and north might point upwards. Research in this area has established that such digital labor practices vary markedly across the globe. The map of digital labor shows strong disparities in *where people voluntarily produce data*, in the *kinds of data* collected about people, and in *which places around the world* are represented in online platforms. Notably, online repositories like Wikipedia and Google StreetView often reflect historical patterns of colonization (Graham, Hale, & Stephens, 2011; Graham, Straumann, & Hogan, 2015). Other digital-geographic trends such as "smart city" programs, which rely on tech-savvy urban denizens to perform data analytics in "loving service" to the city, reflect unsurprising patterns, being located predominantly in the Global North, India, and East Asia (Burns & Andrucki, 2021; Macrorie, Marvin, & While, 2021). Much of the geographical analysis of digital labor is conducted using spatial units such as regional or national borders (see, e.g., Ojanperä, Graham, & Zook, 2019), or uses the traces of digital labor (e.g., social media posts, logs of edits in platforms, trajectories of movement) aggregated to such units (see, e.g., Chapple, Poorthuis, Zook, & Phillips, 2021; Rani & Furrer, 2021). At a smaller scale, the "workplace" figures strongly in these discussions as a key space of *remunerated* work—whether workplaces are envisioned as physical working environments (Gregg, 2011; Richardson, 2018) or as the platforms that enable work execution and worker management (Bucher, Fieseler, Lutz, & Buhmann, 2021; Irani, 2015). In the former, the workplace is the bounded space of work, usually delimited by physical barriers such as walls and firm campuses; the latter is accessed through web browsers, smartphone apps, and dedicated software—in most cases either anchored in physical spaces for internet connectivity or recording one's movement through geolocation services. This geometric conception of space informs the "proximity" debate, which relies on, for example, dichotomous views of "near" and "far" (Rutten, 2017) and describes the regional dynamics of digital industries (Dallasega, Rauch, & Linder, 2018; Losurdo et al., 2019). In each of these cases, the spatial-analytical units are rooted and made legible in Euclidean geometries.

Thus, when an analysis of digital labor does mobilize a spatial lens, it typically adopts this *Euclidean* view of space. Insofar as Euclidean geometries focus on what Lefebvre (1991) called "real space", it is analytically synergistic to think of labor as a discrete and intentional activity that is remunerated by an employer or sponsor (see, e.g., Bucher & Fieseler, 2017). To be clear, the relation between Euclidean geometries and discrete remunerated work is not a *necessary* relation, but one that finds mutual productivity. In contrast, geographers have long conceptualized space as *active* in the production of social relations (Coe & Jordhus-Lier, 2011; Harvey, 2009), and as more than Euclidean in orientation: spaces are also relational, imagined, and highly contingent (Bell & Valentine, 1995; Gregory, 1994; Lefebvre, 1991; Staeheli & Lawson, 1995). There are also strong reasons to think more broadly about labor than as discrete, intentional, and remunerated activities, to include "aesthetic or semiotic" (Scott, 1997, p. 323) economies—such as those linked to attention and libidinal energy (Stiegler, 2009/2010)—that circulate through them (Dean, 2010; Neff, 2017). While the persistence of what Terranova (2014) calls "free labor" should not be dismissed, I am arguing here that digital labor research must consider a broad range of activities beyond discrete, intentional, and remunerated work. In other words, current engagements with digital labor's spaces move several key socio-political processes outside of the purview of research.
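
To make this contrast concrete, here is a minimal sketch (the workers, coordinates, and toy network are hypothetical illustrations, not data from the studies cited) juxtaposing distance in absolute, Euclidean space with "distance" in a relational network, using the third-party networkx library:

```python
import math
import networkx as nx  # third-party graph library

def great_circle_km(a, b):
    """Haversine distance between two (lat, lon) points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Euclidean framing: workers are points in absolute space, aggregated into
# bounded containers such as cities or nation-states.
workers = {"w_london": (51.51, -0.13), "w_toronto": (43.65, -79.38)}
km = great_circle_km(workers["w_london"], workers["w_toronto"])
print(f"{km:.0f} km apart in Euclidean space")

# Relational framing: "nearness" is produced by the platform that mediates
# both workers' labor, regardless of where they sit on the map.
G = nx.Graph()
G.add_edge("w_london", "platform")
G.add_edge("w_toronto", "platform")
hops = nx.shortest_path_length(G, "w_london", "w_toronto")
print(f"{hops} hops apart in the relational network")
```

In the second framing the two workers are two hops apart however far their coordinates lie from one another, while a physically adjacent non-user of the platform would not appear in the network at all: relational space is produced by connections rather than given by geometry.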

Scholars are increasingly aware of these limitations, calling—usually implicitly—for broader conceptual engagement in this area (Aytes, 2012; Graham & Anwar, 2019). Graham (2020), for instance, has recently offered a "conjunctural geographies" approach to the digital labor re/producing platform urbanism. For Graham, conjunctural geographies are the relational spaces that platform firms *produce* in order to be both influential and unaccountable. Mahmoudi and Levenda (2016) turn relational attention toward "immaterial labor" (see also Hardt & Negri, 2004), lending insights into how planetary urbanism is increasingly transforming "rural" areas. Hoffman and Thatcher (2019) advocate for an explicitly *topological* approach to visualizing urban data, breaking from a Euclidean-centered analytical frame, similar to the ways in which Bergmann and Lally (2021) propose "geographical imagination systems" that likewise highlight the value of thinking topologically. Finally, expanding research on automation raises important questions about the role of non-human animals, machines, and sociotechnical artifacts in systems of digital labor (Amoore, 2013; Bastani, 2019; Bissell & Del Casino, 2017; Srnicek & Williams, 2015). Regarding the latter, Arboleda (2020) argues that increasing automation (within his empirical context, mining), rather than leading to the end of work, instead creates new gendered, racialized, and degraded forms of precarious work; in other words, non-human laborers like automated trucks, sensors, drones, and drills produce new relations between mine workers. Across all forms of digital labor, scholars are also increasingly recognizing the important affective, and often gendered, dimensions of platform-mediated work (Bucher & Fieseler, 2017; Schwiter & Steiner, 2020; Spangler, 2020).

Despite this growing recognition of the need for expanded conceptual resources for digital labor research, its conceptions of space, and of the kinds of labor that may happen in/with/through them, remain underdeveloped. In short, we need new ways of thinking about the spaces of digital labor: ways that take up the challenge of moving "beyond the geotag" (Crampton et al., 2013; Shelton, 2017) to consider how space is produced by, for, and alongside digital labor practices and processes. We must locate digital labor beyond *just* Euclidean geometries, thinking relationally about space as an active agent, and broadening "labor" to include the senses mobilized by scholars like Terranova (2014), Zittrain (2008), and Stiegler (2009/2010), where "labor" is not just conscious, active, and remunerated *work* but is diffused across quotidian and often invisible practices such as decoding CAPTCHAs and "paying attention".

#### **Relationality and Digital Labor**

I contend that a diversity of relational thinking approaches can advance our understanding of digital labor. Here I would like to briefly review how relational spatial thinking has been taken up in geographical analysis, borrowing from developments in related social science disciplines. For several decades now, geographers have found that a Euclidean framework is unable to properly capture the contingent, dynamic, globally connected, and often contradictory relations that characterize social processes across space. Following Elwood, Lawson, and Sheppard (2017, p. 746), I mobilize relationality as (1) a socio-spatial ontology that "conceptualiz[es] space itself as constituted through relations that extend beyond a singular place", (2) an epistemological stance that is open to contingent and often contradictory relations, and (3) a politics of possibility that "disrupts hegemonic modes and relations of knowledge production" (Elwood et al., 2017). In this, Elwood et al. (2017) draw most clearly on Massey's (1994) conception of local space as constantly reproduced from the nexus of global networks and flows of capital, power, knowledge, and spatial histories. Relationality prioritizes relations and contexts over individual actors and expects that actors' strategies and activities are non-deterministic and open-ended (Bathelt & Glückler, 2005; Boggs & Rantisi, 2003; Yeung, 2005). Rather than thinking of actors as independent, ontologically stable entities, relationality conceives of actors, boundaries, and spaces as in constant flux, reiteratively co-produced, and anti-essentialist (DeVerteuil, Power, & Trudeau, 2020).

For Amin (2004, p. 34), a relational framework:

re-cast[s cities and regions] as nodes that gather flow and juxtapose diversity, as places of overlapping—but not necessarily locally connected—relational networks, as perforated entities with connections that stretch far back in time and space, and, resulting from all of this, as spatial formations of continuously changing composition, character, and reach (Amin & Thrift, 2002). Seen in this way, cities and regions come with no automatic promise of territorial or systemic integrity, since they are made through the spatiality of flow, juxtaposition, porosity and relational connectivity.

Here, Amin (2004) draws on conceptual material that has been leveraged for a range of relational approaches. Similar to Murdoch's (2006, p. 18) summary of relationality, spaces "should not be seen as closed and contained but as open and engaged with other spaces and places", extending beyond political boundaries such as municipal jurisdictions, to connect distant geographies in complex networks and flows. Spaces do not exist *a priori*, independently of the actors and processes that produce them for particular purposes and with particular interests in mind; in other words, according to Doel (2007, p. 809), "space is *continuously* being made, unmade, and remade by the incessant shuffling of heterogeneous relations". Various spatial formations such as regions and supply chains are *produced* for the creation and maintenance of socio-political and economic relations (Bathelt & Glückler, 2003). In this framework, subjects are likewise produced relationally: individual and collective formations tie together their relations to processes, spaces, natures, technologies, and other individuals/groups (Delfanti & Arvidsson, 2019), and indeed even the distinction between "the social" and "the natural" begins to deteriorate (Whatmore, 1999). In this, geographers draw on a long history of relational thinking in related disciplines such as sociology, where, according to Emirbayer (1997, p. 287), "[r]elational theorists reject the notion that one can posit discrete, pregiven units such as the individual or society as ultimate starting points of sociological analysis". Jones (2009) locates the lineage of relational spatial thinking through Harvey's (2009) spatial dialectics (see also Sheppard, 2008) back to Leibniz's non-Euclidean philosophy; in contrast, absolute space is more characteristic of Newtonian philosophy. Quoting Callon and Law (2004, p. 6), Jones argues that thinking relationally "is an empowering perspective. It suggests that space and its orders are always open such that 'the local is an achievement in which a place is localized by other places and accepts "localization" itself. But this means that no place is closed off'".

Relational spatial thinking troubles the ontological certainty with which digital labor is often approached. Rather than falling for "the territorial trap" (Agnew, 1994) cast in a Euclidean geometric framework that takes units such as the nation-state as the containers in which activities happen, relationality reminds us that digital labor and the digital *laborer* emerge as phenomena because of the non-Euclidean relations between platform capitalism, global precarity and inequality, and the intimate relationships that germinate much of social media. To insist on Euclidean boundaries of the nation-state, the city, and various mesoscales risks what Angelo and Wachsmuth (2015) call a "methodological cityism", later taken up by Arboleda (2020) as "methodological nationalism", in which scholarship privileges the absolute geographies of the city or nation-state, masking processes that tie those units into broader geographies – and in many cases disrupt those boundaries altogether. Euclidean geometric analyses also frequently aggregate occupants of similar absolute geographies into the same analytical unit even though those occupants may represent quite different relational geographies (e.g., backgrounds, citizenship, relation to capital, social capital).

These absolute geographies, while foregrounding important spaces of policymaking, juridical enforcement of labor laws, and scalar production of labor markets, obfuscate the relational geographies that are produced in order to institute digital labor practices. While a microtasker's physical location in Kuala Lumpur might be important for asking particular questions, their Euclidean position on the globe tells us less about the relational spaces of financial speculation and tax havens that led to Amazon Mechanical Turk's prominence in digital labor markets, the political-economic precarity of Malaysian workers produced dialectically with multinational corporations' drive to minimize labor costs by exporting particular forms of labor to the Global South, and the spaces of care and social reproduction that support that digital laborer's work. More than previous labor regimes, digital labor transacts on a planetary scale, and regulatory frameworks have been slow to adapt beyond absolute jurisdictions like the nation-state. To think about the relational spaces of digital labor opens opportunities for (re)thinking how and where digital labor occurs, and therefore how it should be regulated. A relational-spatial approach might thus take the platform less as a website or smartphone app that one enters and exits, and more as a mediator of global political-economies, sociospatial divisions of labor, and spaces for the production of intimate feelings of belonging or marginalization.

However, the ontological certainty of Euclidean geometries also informs how digital labor researchers think about *work itself*. While the importance of the workplace and its remunerative tendencies should not be underestimated (even in a post-Covid world), a decade of research on attentional economies reminds us of the quotidian systems that valorize practices of scrolling, searching, and streaming (Ash, 2015; Celis Bueno, 2017; Crogan & Kinsley, 2012; Terranova, 2012). In everyday contexts, one need not be employed to produce profitable content by posting on social media, or by reporting a "speed trap" within a navigation app. Rather, a relational geographies perspective reminds us that spaces are produced by digital technologies—a web platform, an urban services app, an advertisement interrupting an online video—precisely to enroll large numbers of (usually unwitting) laborers into the value-production process. Moreover, these laborers are often enrolled by mobilizing *other* relational geographies, such as the affective spaces of viewing geographically-distant friends' Facebook posts, or an ad for a political candidate. As many remind us, digital spaces like social media, advertisements, and suggested videos are all carefully curated by algorithms that we have trained through our web browsing, email content, and clicks on links (Cheney-Lippold, 2017; Noble, 2018): As Mark Zuckerberg once responded to United States Senator Orrin Hatch's inquiry about Facebook's source of profit, "Senator, we run ads." These spaces affectively compel users to produce content, without compensation beyond the privilege of using platforms' services, and often subtend the production of new forms of social life, communities, and knowledge politics (Burns & Wark, 2020; Hine, 2000; Miller & Slater, 2000; Nagle, 2017); they are both *produced* spaces and *productive* spaces, and deeply relational. Jonathan Zittrain (2008) has likewise pointed out that Optical Character Recognition—and related machine learning algorithms designed to translate images into text—are often trained by unsuspecting users of CAPTCHA (engelia besik, 2014). In other words, everyday activities have been intensely woven into the production of value such that one no longer need be in a workplace, or even *intentionally* working, to be producing highly valuable information and content. That such a broad range of labor is unremunerated has led Qiu (2016) to call such digital labor "iSlavery".

#### **Conclusion**

In this chapter, I have argued that space is under-conceptualized in digital labor research, leading to the omission of a range of important socio-political processes. When space is considered at all, research typically mobilizes absolute spaces rooted in Euclidean geometries, most immediately operationalized as geopolitical boundaries, and is usually concerned with discrete and intentional acts of remunerated work. Research is beginning to recognize the limited analytical purchase of these spatial underpinnings, and new conceptions of space are needed and beginning to emerge. Among other important implications, a relational thinking approach raises the need to reconsider how digital labor is regulated: perhaps instead of locating digital labor within the boundaries of a nation-state, regulators should consider the planetary scale of platforms, digital capitalism, and the workers that make and use them. A relational spatial thinking approach opens possibilities for thinking otherwise about the spaces of digital labor, as taking place in non-Euclidean spaces such as the affective spaces of social media and the spaces emerging from broader political-economic processes. In these relational spaces, labor consists of mundane, quotidian digital practices such as "paying attention" and interacting with geographically-dispersed communities.

Following Elwood et al. (2017), this re-spatialization of digital labor has tremendous political implications. For one, it reminds us that people's everyday spaces are not limited to their immediate surroundings, and that the systems of care and belonging that enroll digital participation (Dourish & Satchell, 2011) do not easily map onto Euclidean geometries. The heterogeneity within geographic units is less important than the epistemological consequences of recognizing the limitations of a Euclidean framework. Second, spatializing attentional economies draws our attention to the processes by which digital spaces actively recruit labor that goes unpaid. While scholars have long recognized the profitability of attention and digital interactions, conceiving of them as spaces invites us to think differently about their relations with a diverse set of human and non-human actors.

Looking forward, turning attention to the relational spaces of digital labor raises many fundamentally important questions and considerations. First, does digitalization offer particular inflections of the now longstanding processes of immaterial labor (Dyer-Witheford, 2001; Hardt & Negri, 2004; Lazzarato, 1996)? Does the materiality of digital infrastructures link immaterial labor with other socio-natural implications, including global climate change and the continued deterioration of the commons? Second, does the digitalization of relational-spatial labor necessarily lead to the proletarianization of laborers, as hypothesized by Stiegler (2009/2010) and Dyer-Witheford (2015)? Or, on the contrary, does the indeterminacy of digital technologies retain a glimmer of hope of subverting global capitalism or, on a smaller scale, of empowering some individuals and communities? Lastly, how are new activities valorized for capitalist logics, or are post-capitalist labor regimes emerging in the spaces of digital technologies?

#### **References**


**Ryan Burns** is an Associate Professor in the Department of Geography at the University of Calgary, a Fellow of the Royal Canadian Geographical Society, and a Visiting Scholar with the University of Erlangen-Nuremberg. His interdisciplinary research at the intersection of digital geographies, urban studies, GIScience, and Science & Technology Studies examines the social, political, and urban transformations wrought by new digital technologies. He is a public scholar, conducting work of political import to various communities, and communicating research outcomes to broad audiences. He holds editorial board positions with *ACME: International Journal of Critical Geographies*, *Digital Geography & Society*, and *Frontiers in Big Data*, and is the vice-chair for the Digital Geographies Specialty Group of the American Association of Geographers.


## **Part III Ethics, Norms, and Governance**

## **Chapter 11 The Promise and Prospects of Blockchain-Based Decentralized Business Models**

**Andranik Tumasjan**

Chair for Management and Digital Transformation, Johannes Gutenberg University Mainz, Mainz, Germany
e-mail: antumasj@uni-mainz.de

More than a decade ago, blockchain technology emerged as the backbone of the cryptocurrency Bitcoin (Nakamoto, 2008). Shortly after the network's initial implementation in 2009, Bitcoin already began to inspire a variety of different blockchain technology use cases across diverse industries, spanning from new cryptocurrencies (e.g., Litecoin) to novel business models in the financial, insurance, media, energy, and supply chain sectors, to name just a few examples (e.g., Dutra, Tumasjan, & Welpe, 2018). In the late 2010s, there was a worldwide hype around blockchain technology due to the (often exaggerated) promotion of different desirable characteristics, including catchwords such as transparency, immutability, security, automation, trustlessness, and decentralization (Tapscott & Tapscott, 2016; Tumasjan, 2021).

Indeed, one of blockchain technology's central promises has been and continues to be the notion of "decentralization" (Hoffman, Ibáñez, & Simperl, 2020; Tumasjan, 2021; Walch, 2019). This promise originally stems from the Bitcoin developers' goal to create a "purely peer-to-peer version of electronic cash" with which users could avoid "going through a financial institution," as Satoshi Nakamoto explained in his Bitcoin whitepaper (Nakamoto, 2008, p. 1). Notably, Nakamoto (2008) makes no direct mention of "decentralization" (or related terms). Rather, the notion of decentralization has perhaps been most heavily popularized by Vitalik Buterin (2014a), the founder of Ethereum (i.e., the largest and most established general-purpose blockchain platform). In his initial Ethereum whitepaper, titled "A Next-Generation Smart Contract and Decentralized Application Platform" (Buterin, 2014a), he lays out decentralized application ideas that stretch beyond a peer-to-peer currency, such as decentralized file storage, online voting, marketplaces, and so-called decentralized autonomous organizations (DAOs; i.e., virtual organizations that are owned and governed by their members using blockchain technology for their administration)—all of which could be built on top of the Ethereum platform. This notion of decentralization is also embodied in his famous quote: "Whereas most technologies tend to automate workers on the periphery doing menial tasks, blockchains automate away the center. Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi drivers work with the customer directly" (Vitalik Buterin, as cited in Tapscott & Tapscott, 2016, p. 34).

Hence, in the years that followed, decentralization became one of the most frequently used catchwords in the blockchain discourse and inspired the development of a myriad of blockchain-based decentralized business models (BDBM) and applications (Schneck, Tumasjan, & Welpe, 2020; Tumasjan & Beutel, 2018). For instance, OpenBazaar offers a peer-to-peer marketplace as a blockchain-based alternative to services such as eBay; Steemit offers a blockchain-based social media network as an alternative to services such as Facebook and Twitter; and Synthetix allows anyone to create and trade derivatives on assets (e.g., stocks) as a decentralized alternative to traditional banks. Likewise, cryptocurrency exchanges, wallet providers, and other cryptocurrency service providers (e.g., cryptocurrency index funds) have emerged, offering a variety of nontraditional financial services around the management of blockchain-based digital assets. Moreover, large corporations have started to use blockchain-inspired distributed databases in company consortia (i.e., distributed ledger technologies, such as Hyperledger Fabric) with the goal of "decentralizing" power and control around data management and business processes (Kernahan, Bernskov, & Beck, 2021). In addition, these developments have been accompanied by a fast-growing body of research on BDBM and marketplaces (Hoffman et al., 2020) covering all imaginable industries, most prominently finance, healthcare, supply chain, and energy.

However, despite all these developments in the past decade and blockchain technology's purportedly desirable characteristics, mainstream usage of BDBM still seems far away: BDBM remain a niche market in comparison to extant traditional, "centralized" digital business models (Schneck et al., 2020). Why has the mainstream adoption of BDBM not advanced further, despite the appeal of decentralization in a world dominated by heavily centralized and criticized institutions, such as banks and digital platforms (e.g., the GAFA: Google, Apple, Facebook, Amazon; Tumasjan, 2021)? Alongside often mentioned factors—such as major challenges of technical scalability, security, regulation, technology acceptance, and legitimacy (cf. controversial innovations; Delacour & Leca, 2016; Glückler, 2014)—a crucial factor concerns the regular customers' perspective and their willingness to use BDBM (Tumasjan & Beutel, 2018). Although researchers have often mentioned the customers' perspective as a barrier to mainstream adoption (e.g., Chen & Bellavitis, 2020), they have limited themselves to addressing BDBM's usability and user friendliness. However, as this article will show, high levels of decentralization in BDBM demand a substantial amount of cognitive effort from customers, requiring considerably higher levels of knowledge and expertise, self-reliance, and responsibility, which developers cannot resolve merely by improving usability and user friendliness.

To analyze BDBM's promise from a regular customer's perspective, this analysis focuses, first, on the different meanings of the term decentralization and creates a typological framework to better understand the different projects and business models aimed at decentralizing. Second, the article derives and discusses decentralization's implications for the mainstream adoption of BDBM from a regular customer's point of view. The chapter thereby contributes to our understanding of the relationship between knowledge and technology—the core aim of this book—by showing how decentralization requires the regular customer to demonstrate elevated levels of both knowledge and expertise.

#### **Background: Blockchain Technology**

Blockchain technology is a data infrastructure with which users can share, synchronize, validate, and replicate digital data across a network that is spread over multiple entities (e.g., Risius & Spohrer, 2017). Blockchain technology relies on decentralized structures without the need for centralized maintenance or data storage (e.g., Friedlmaier, Tumasjan, & Welpe, 2018; Nguyen & Kim, 2018). Hence, its users can securely create, maintain, and validate any form of digital transaction without the need for a centralized intermediary or governance mechanism to establish trust among agents in the network (e.g., Casino, Dasaklis, & Patsakis, 2019; Meijer & Ubacht, 2018).
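To make this description concrete, here is a minimal Python sketch of a hash-linked ledger; it illustrates only the general principle (all function and field names are my own invention), not the data model of any particular blockchain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically (sorted keys)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Append a new block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev,
                  "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Any replica can independently validate the shared history."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert is_valid(chain)  # tampering with any earlier block breaks validation
```

Because each block commits to the hash of its predecessor, every participant holding a replica can detect tampering without deferring to a central authority.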

The currently most prominent use case is the cryptocurrency Bitcoin. As the first application of blockchain technology, the Bitcoin network was launched by its creator(s) in 2009 to create a global peer-to-peer electronic cash system secured by a distributed consensus-building mechanism (mining, i.e., decentralized actors providing the computational power to store, validate, and maintain the network), combining cryptographic hashing (Nadeem, 2018) with insights from game theory (Bonneau et al., 2015). By building a global peer-to-peer cash system, Bitcoin's creator(s) aimed at cutting out intermediaries (e.g., central banks and commercial banks) in the financial sector and providing an electronic payment system for anyone (Nakamoto, 2008).
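As a rough illustration of this mechanism, the following toy proof-of-work sketch searches for a nonce whose hash meets an artificially low difficulty target; it conveys the general idea behind mining, not Bitcoin's actual block format or difficulty rules:

```python
import hashlib

DIFFICULTY = 4  # required number of leading zeros (toy value, not Bitcoin's)

def mine(block_payload: str) -> tuple[int, str]:
    """Search for a nonce whose hash meets the difficulty target.
    Finding the nonce is computationally costly; verifying it takes one hash."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_payload}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 42: alice pays bob 5")
print(nonce, digest)
```

The asymmetry between costly search and instant verification is what lets mutually distrustful nodes agree on a single history without a central clearinghouse.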

The second largest blockchain protocol in terms of market capitalization (Coinmarketcap, 2023), the Ethereum network, is a distributed computing platform and operating system for running so-called "decentralized applications" (dApps; i.e., digital applications directly connecting users of a decentralized network; cf. Wright & De Filippi, 2015) on top of a blockchain protocol. It was the first protocol to enable so-called "smart contracts," in other words, algorithms that automatically execute transactions when predetermined conditions occur, following a simple if-this-then-that logic. These smart contracts are the foundation for the creation of new applications, such as DAOs or other non-financial applications, that do not require their own novel protocols (cf. Buterin, 2014b).
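This if-this-then-that logic can be illustrated with a plain-Python analogue; real smart contracts are deployed on-chain (e.g., written in Solidity for Ethereum), and the flight-delay insurance example below, loosely inspired by decentralized insurance products such as Etherisc, is entirely hypothetical:

```python
class FlightDelayInsurance:
    """Toy analogue of a smart contract: pays out automatically
    once a predefined condition is met, with no intermediary."""

    def __init__(self, insured: str, premium: float, payout: float):
        self.insured = insured
        self.pool = premium      # funds locked in the contract
        self.payout = payout
        self.settled = False

    def report_delay(self, delay_minutes: int) -> float:
        # if-this: the reported delay exceeds the agreed threshold ...
        if not self.settled and delay_minutes > 120:
            self.settled = True
            # then-that: the payout is released by code, not by a clerk
            return self.payout
        return 0.0

contract = FlightDelayInsurance("alice", premium=20.0, payout=300.0)
print(contract.report_delay(180))  # 300.0, executed automatically
```

On-chain, such conditions are typically fed by external data sources ("oracles"), and execution is enforced by the network rather than by any single party.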

Developers have proposed and piloted many other applications beyond cryptocurrencies, such as supply chain tracking and tracing as well as financial, healthcare-data, identity, and energy management (Casino et al., 2019). In these cases, they have discarded the original public and open blockchain technology approach (e.g., Bitcoin and Ethereum) and instead propose and promote new "blockchain-inspired" solutions. These blockchain-inspired solutions often fall under the label of "distributed ledger technologies" (DLT; i.e., digital databases that are shared and synchronized across multiple instances, such as Hyperledger Fabric) and can be categorized as so-called "private-permissioned" blockchains.

Hence, a crucial distinction in the broad field of blockchain technology today concerns *public* versus *private* blockchains on the one hand, and *permissionless* versus *permissioned* blockchains on the other, the combination of which results in a 2 × 2 matrix (Beck, Müller-Bloch, & King, 2018). In general, public blockchains are open to anyone who wishes to view and enter transactions, whereas private blockchains only permit such activity after registering with the network's central administrator (Beck et al., 2018). Permissionless blockchains allow anyone not only to view and enter but also to validate transactions, whereas in permissioned blockchains validating is reserved for registered participants. In a public-permissionless blockchain (e.g., Bitcoin), anyone can fully participate in the network, in other words, they can view, enter, and validate transactions. In a public-permissioned blockchain (e.g., Sovrin), however, although anyone can view and enter transactions, only authorized participants can validate them. In private-permissioned blockchains (e.g., Hyperledger Fabric), only registered participants can view, enter, and validate transactions.
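One way to summarize this 2 × 2 matrix is to encode who may view, enter, and validate transactions for each type; the sketch below merely restates the paragraph above (the fourth combination, private-permissionless, is not described in the text and is therefore omitted):

```python
# Who may view, enter, and validate transactions per blockchain type,
# following Beck, Müller-Bloch, & King (2018) as summarized above.
RIGHTS = {
    ("public", "permissionless"):  # e.g., Bitcoin
        {"view": "anyone", "enter": "anyone", "validate": "anyone"},
    ("public", "permissioned"):    # e.g., Sovrin
        {"view": "anyone", "enter": "anyone", "validate": "authorized only"},
    ("private", "permissioned"):   # e.g., Hyperledger Fabric
        {"view": "registered only", "enter": "registered only",
         "validate": "registered only"},
}

for (access, validation), rights in RIGHTS.items():
    print(f"{access}-{validation}: {rights}")
```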

Importantly, the developers of almost all of these blockchain technology and blockchain technology-inspired solutions state and stress that they aim at decentralizing certain aspects of digital asset transactions. The extent to which they actually do so, however, varies immensely, as will be shown in the Section "Different Kinds of BDBM: Toward a Typology" below. The following two sections review extant research on BDBM (Section "Extant Research on BDBM") and show the problematic use of the term decentralization in research and practice (Section "Understanding the Term 'Decentralized' in BDBM"). To systematize and make transparent the different uses and meanings of decentralization in extant BDBM research and practice, the Section "Different Kinds of BDBM: Toward a Typology" develops a typological two-dimensional framework yielding four BDBM archetypes. Finally, the Section "Implications for BDBM Types' Mainstream Adoption from a Customer's Perspective" derives the implications for the four BDBM types' mainstream adoption from a customer's perspective, before the Sections "Discussion" and "Conclusion" discuss and conclude this analysis.

#### **Extant Research on BDBM**

Researchers have been conducting an emerging and steadily growing stream of BDBM-related investigations since as early as 2010. A literature search using a comprehensive range of keywords related to BDBM<sup>1</sup> in the fields "title," "abstract," and "keywords" in the bibliographic database Scopus yielded N = 967 publications, mostly in the subject areas of computer science (N = 757), engineering (N = 426), decision sciences (N = 242), mathematics (N = 186), and business, management, and accounting (N = 170).<sup>2</sup> To visualize the extant BDBM research landscape, the publications' keywords were analyzed using the software VOSviewer (version 1.6.17; van Eck & Waltman, 2010). Specifically, the keywords were analyzed based on co-occurrence, and the map was restricted to keywords that appeared at least 10 times, yielding a total of 90 keywords. The resulting research landscape is shown in Figure 11.1 below.
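For readers wishing to retrace the mapping step, the sketch below shows the kind of keyword co-occurrence counting on which such maps rest; VOSviewer performs the actual clustering and layout, and the publication records here are invented toy data, not the Scopus sample:

```python
from collections import Counter
from itertools import combinations

# Toy publication records, each a list of author keywords (illustrative only).
publications = [
    ["blockchain", "smart contract", "decentralization"],
    ["blockchain", "bitcoin", "decentralized finance"],
    ["blockchain", "smart contract", "Ethereum"],
]

# Count how often each pair of keywords appears in the same publication.
cooccurrence = Counter()
for keywords in publications:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# Restrict the map to sufficiently frequent keywords
# (the chapter uses a threshold of 10 occurrences; 2 here for the toy data).
MIN_OCCURRENCES = 2
frequent = {kw for kws in publications for kw in kws
            if sum(kw in p for p in publications) >= MIN_OCCURRENCES}

print(cooccurrence.most_common(3))
print(frequent)
```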

**Fig. 11.1** BDBM research landscape based on publication keywords. Source: Design by author

<sup>1</sup>The exact query was ((TITLE-ABS-KEY(blockchain or "distributed ledger technolog\*") AND TITLE-ABS-KEY(decentraliz\* OR decentralis\* OR disintermediat\*) AND TITLE-ABS-KEY("business model" or "business models"))) OR ((TITLE-ABS-KEY(blockchain or "distributed ledger technolog\*") AND TITLE-ABS-KEY("decentralized market\*" or "decentralized exchang\*" or "decentralized platform" or "decentralized e-commerce" or "decentralized application\*"))) OR (TITLE-ABS-KEY("decentralized finance")) AND (LIMIT-TO (LANGUAGE,"English")).

<sup>2</sup> In the Scopus database, a publication can be assigned to multiple subject areas.

As is to be expected, and evident from Figure 11.1, the notion of decentralization indeed occupies a central role in the extant BDBM publications. In fact, decentralization (including related keywords, such as "decentralized system" and "decentralized management") is the third most mentioned keyword (the first two being "blockchain" and "smart contract"). Six clusters emerge from the present analysis, as shown in Figure 11.1. Cluster 1 (23 keywords) mainly contains research about BDBM in the context of enterprise applications in different industries, comprising keywords such as "distributed ledger technology" (DLT), "supply chain," "industry 4.0," "healthcare," "smart city," "transparency," and "digital transformation." Cluster 2 (16 keywords) mainly contains research about BDBM in the energy sector, comprising keywords such as "decentralization," "distributed energy," "micro grid," "power markets," "peer to peer," "renewable energy," "electric power transmission," "e-commerce," and "cost effectiveness." Cluster 3 (15 keywords) mainly contains research about BDBM in the context of cryptocurrencies, comprising keywords such as "bitcoin," "cryptocurrency," "electronic money," "decentralized exchange," "decentralized finance," and "proof of work." Cluster 4 (15 keywords) mainly contains research about BDBM in the context of data analytics and management, comprising keywords such as "cloud computing," "distributed systems," "data analytics," "machine learning," and "computation." Cluster 5 (11 keywords) mainly contains research about BDBM in the context of data security and privacy, comprising keywords such as "access control," "authentication scheme," "cryptography," and "security and privacy." Cluster 6 (10 keywords) mainly contains research about BDBM and smart contracts, comprising keywords such as "smart contract," "Ethereum," "decentralized application," "scalability," and "automation."

Overall, what can be gleaned from this keyword analysis is that BDBM researchers have moved far beyond examining cryptocurrencies in general and are examining BDBM in enterprise settings and a range of different industries. In terms of industries beyond financial services, there seems to be an emphasis on the energy sector, followed by healthcare and supply chains. Importantly, decentralization is close to the center of the research landscape, with strong connections to all research clusters, while being closest to and part of Cluster 2, which mostly comprises energy-related keywords (see Fig. 11.2).

**Fig. 11.2** Location and connections of the term decentralization. Source: Design by author

#### **Understanding the Term "Decentralized" in BDBM**

Although decentralization is one of the most frequently used terms in the blockchain technology discourse in both practice and research, many confusions and ambiguities about its meaning remain (Walch, 2019). This is primarily because most of those describing blockchain technology in both practice and research publications do not properly define what they mean by the term, instead merely listing decentralization as a property of blockchain technology (Tumasjan, 2021). Moreover, the stated goal of decentralization also differs substantially across different applications and actors in the blockchain discourse. These meanings continue to range widely, stretching from implementing secured shared data management and transparency in the context of enterprise use (e.g., DLT in supply chains using IBM's Hyperledger Fabric) to establishing cryptocurrencies with the aim of disintermediating or abolishing traditional financial and governmental institutions (e.g., Bitcoin), or even the state as a whole (Atzori, 2015). Whereas in the former cases of DLT decentralization happens within the framework of traditional hierarchical organizations and institutions, in the latter cases the term is used to describe new blockchain-based digital assets aimed at providing an alternative to traditional government currency and/or the incumbent financial system and/or established governmental institutions. Moreover, actors also use the term to describe non-hierarchical or cooperative forms of organizations or marketplaces, where anyone can connect to contribute to the organization via writing code, building applications, voting, and/or using the services (e.g., DAOs). In these cases, decentralization is meant as an antidote to the power and organization of large corporate firms and digital platforms (e.g., the GAFA) toward establishing digital cooperatives (Kollmann, Hensellek, de Cruppe, & Sirges, 2020). Thus, a variety of different actors have been using (and continue to use) decentralization in the context of blockchain technology to describe completely different means and ends.

Unfortunately, scholars (including myself) have often neglected to properly define what is meant by the term "decentralization." Even when they have spelled out a definition, the results have varied substantially (Hoffman et al., 2020). In their review, Hoffman et al. (2020) list the 16 most relevant publications with different meanings of decentralization.

In many cases, researchers have focused on one aspect of decentralization (e.g., decentralized governance and disintermediation of incumbent institutions, or technological-infrastructural distributedness of database nodes) or mixed the different meanings. For instance, Chen, Pereira, and Patel (2021) define decentralization as "the extent to which power and control in governance structures and decisions are allocated to developers and community members" (p. 13), referring to the governance dimension. Conflating both aspects, Chen and Bellavitis (2020, p. 2) contrast "centralized financial systems" with "decentralized financial systems": In the former, "financial institutions are the key intermediaries mediating and controlling financial transactions," whereas in the latter, "financial transactions are facilitated . . . by decentralized peer-to-peer networks" and "no single entity can accumulate sufficient monopoly power to monopolize the network and exclude others from participating." Thus, in this view, both the governance and the technological-infrastructural aspects are combined. In contrast, there exists a large body of work on DLT in practice and research whose authors have focused less on the governance aspect and more on the technical side of decentralization (i.e., distributed data structures). For instance, numerous researchers have dealt with the decentralization of data management in the healthcare (e.g., De Aguiar, Faiçal, Krishnamachari, & Ueyama, 2020), energy (e.g., Ante, Steinmetz, & Fiedler, 2021), and automobile (e.g., Fraga-Lamas & Fernández-Caramés, 2019) industries, with a focus on decentralized ways of data management rather than the disintermediation of powerful incumbent institutions.

In sum, the blockchain discourse in both research and practice continues to harbor considerable ambiguity and confusion around the term decentralization. This state of affairs has been creating misunderstandings not only in the industry and scientific discourse but also among the general public about the possibilities and goals of decentralization based on blockchain technology. As a result, some have suggested dropping the term altogether due to its fuzziness (Walch, 2019). To make sense of the different meanings of decentralization in BDBM and to derive the implications of decentralization for BDBM mainstream adoption, the following section develops a typological framework characterizing the extent of actual and desired decentralization in BDBM.

#### **Different Kinds of BDBM: Toward a Typology**

To further examine the phenomenon of, and research into, BDBM requires an understanding of its two underlying terms beyond "blockchain," namely "business model" and "decentralized." Although the term business model has a variety of definitions, most researchers agree that business models can be defined as schemes that describe (at least) the who (customer group), what (value proposition), how (firm activities), and value capture (how money is made) dimensions of a business (Gassmann, Frankenberger, & Csik, 2014; Massa, Tucci, & Afuah, 2017). Thus, the present article uses this broad business model definition to describe the notion of BDBM.

To disentangle the different meanings of "decentralization" in BDBM, this analysis builds on the two dimensions identified by Walch (2019). Specifically, Walch (2019, p. 41) pinpoints two meanings of decentralization in the context of the blockchain discourse, namely "resilient" (i.e., technical dimension: no single point of failure due to distributed nodes) and "free from the exercise of concentrated power" (i.e., governance dimension: no single entity exerts ultimate power due to distributed decision rights).

Building on Walch (2019), this article develops a framework with two dimensions to characterize extant BDBM: (1) infrastructural distributedness (i.e., the technical dimension of decentralization) and (2) institutional disintermediation (i.e., the governance dimension of decentralization). The first dimension, infrastructural distributedness, refers to decentralization focused on the technical infrastructure. This focus includes characteristics such as distributed nodes, data sharing, and transparent data management. The second dimension, institutional disintermediation, refers to decentralization focused on the concentrated decision rights of powerful institutions. This focus includes characteristics such as the disintermediation of incumbent powerful corporations and/or governmental institutions and their replacement by virtual communities with collective voting for decision-making and joint ownership (e.g., digital cooperatives).

As is evident, decentralization lies on a continuum on both dimensions, as the extent of decentralization actually aimed at varies considerably between different applications and projects. Thus, the framework aims at including the entire bandwidth of decentralization ambitions. For instance, one could argue that creating a shared data management system for healthcare records entails lower decentralization ambitions than creating a purely peer-to-peer network for energy trading. Similarly, establishing cryptocurrency exchanges and wallet services also entails lower decentralization ambitions than aiming at circumventing centralized services altogether and instead making transactions only in a peer-to-peer fashion using cryptocurrencies. Moreover, the extent of decentralization of blockchain projects is not static but may change over time (Beck et al., 2018). For instance, the developers of the blockchain-based peer-to-peer sharing economy project Swarm City intentionally centralized decision rights from the start to set up a productive application, with the aim of decentralizing governance over time (Beck et al., 2018).

The two decentralization dimensions can be seen as independent of each other. Combining both dimensions yields a two-by-two matrix with four quadrants and four BDBM archetypes (see Fig. 11.3). The following paragraphs characterize the framework and the four resulting quadrants. In all instances, as of today, the financial sector applications are the most advanced, whereas non-financial applications generally lag behind.

**Fig. 11.3** Typology of BDBM. Source: Design by author

*Quadrant 1: BDBM-T1* This quadrant comprises BDBM projects that have a strong focus on both infrastructural and institutional decentralization. The main goal is to disintermediate incumbent powerful state institutions, the financial system, and/or firms by means of building a decentralized, and thus resilient, network structure and by establishing decentralized governance. Examples include Bitcoin, Ethereum, and Decentralized Finance (DeFi) applications, such as Uniswap, as well as decentralized marketplaces, such as OpenBazaar. However, the scope and targets of institutional decentralization differ tremendously between projects and participants. For instance, proponents of Bitcoin as the only required cryptocurrency (so-called "Bitcoin maximalists") focus on establishing it as the sole digital financial asset and as an alternative to fiat money and, hence, traditional financial institutions. Whereas Bitcoin maximalists view Bitcoin as the sole necessary worldwide digital asset and favor abolishing fiat money institutions (e.g., central and commercial banks), they do not favor community-owned DAOs (which are mainly built on the basis of other cryptographic tokens or currencies, so-called "altcoins"). On the other hand, most projects in the field of DeFi (Schär, 2021) focus on building a more efficient and inclusive financial system by "replicat[ing] existing financial services in a more open and transparent way" (p. 153), mostly using Ethereum and Ethereum-based tokens (i.e., altcoins). Thus, DeFi goes beyond "merely" establishing a cryptocurrency or digital asset toward building a new financial services system independent of incumbent institutions. Moreover, there are also many non-financial projects whose focus is on building community-owned and fully democratically governed organizations (e.g., DAOs built in the frameworks of Aragon or DAOstack). Whereas financial management is always a component (e.g., to pay for efforts or vote according to tokens owned; Hülsemann & Tumasjan, 2019), in contrast to most DeFi applications, these project developers mainly focus on realizing goals in a fully open, transparent, democratic, and community-driven way without the involvement of traditional state and legal institutions. Whereas the latter is not necessarily the focus of DeFi applications, there are, of course, overlapping projects focusing on both goals.

*Quadrant 2: BDBM-T2* This quadrant comprises BDBM projects that have a low focus on infrastructural and a high focus on institutional decentralization. The main goal of these BDBM projects is to provide blockchain-based products and services as an alternative to traditional centralized products and services to disintermediate incumbent institutions. Extant company examples include centralized exchanges (e.g., Coinbase), wallet providers (e.g., Trezor), and cryptocurrency investment funds (e.g., Grayscale). In these BDBM, the focus is on helping customers use alternative means of digital asset transactions, thereby disintermediating existing centralized financial products and services (a high focus on institutional decentralization). However, these companies do not focus on building decentralized peer-to-peer networks (a low focus on infrastructural decentralization), instead mostly using centralized infrastructure (e.g., Coinbase storing digital assets on centralized servers). These BDBM can be seen as an interface connecting traditional financial services to blockchain-based digital assets. They are accordingly often considered an entry gate to using digital assets.

*Quadrant 3: BDBM-T3* This quadrant comprises BDBM projects and applications that have a high focus on infrastructural and a low focus on institutional decentralization. The main goal of these BDBM is to provide decentralized (in the sense of distributed and transparent) network infrastructures to improve shared business processes (e.g., shared data management and product tracking) but not to disintermediate incumbent government and financial institutions and large corporate firms. Extant examples are providers of enterprise and government DLT solutions (e.g., Hyperledger Fabric, R3, Enterprise Ethereum). The main idea of these BDBM is to gain efficiencies within a business network of trusted partners where data-based business processes are stored, shared, and worked on in a decentralized, transparent, and cryptographically secure way. In these cases, the term decentralized comprises data distributedness and equal transparency and/or decision rights by all registered partners involved, and serves as a juxtaposition to a centralized "black box" data management solution controlled by one provider.

*Quadrant 4: BDBM-T4* This quadrant comprises BDBM projects that have a low focus on both infrastructural and institutional decentralization. The main purpose of these BDBM is to use blockchain technology and/or DLT-inspired systems to build *centralized* data systems with a high level of (e.g., cryptographic) security and the possibility of programmability (e.g., smart contracts). Extant examples include the central bank digital currencies (CBDC) being discussed and piloted worldwide. Importantly, CBDC projects do not aim at decentralizing at all. Thus, although these applications may be inspired by blockchain technology, they are not aimed at building decentralized infrastructure or institutions, but at building centrally controlled shared ledgers connecting central banks with commercial banks, market makers, and large corporations (Consensys, n.d.). Moreover, developers can implement Ethereum-inspired smart contracts to automate processes and ensure compliance with predefined if-then rules (Consensys, n.d.). Thus, although blockchain technology may constitute the basis or the inspiration for BDBM-T4, "sharedness" rather than decentralization is the goal of these projects. As a result, even though their origin and/or inspiration may stem from blockchain technology, BDBM-T4 may not be considered decentralized business models in the sense of the initial blockchain technology idea.
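To condense the typology, the following sketch reduces the four quadrants to a simple decision rule; treating each dimension as a boolean is a deliberate simplification of what this chapter describes as continua, and the example classifications merely restate the cases discussed above:

```python
def bdbm_type(infrastructural: bool, institutional: bool) -> str:
    """Map the two decentralization dimensions to the four archetypes."""
    if infrastructural and institutional:
        return "BDBM-T1"  # e.g., Bitcoin, Ethereum, DeFi, OpenBazaar
    if institutional:
        return "BDBM-T2"  # e.g., Coinbase, Trezor, Grayscale
    if infrastructural:
        return "BDBM-T3"  # e.g., Hyperledger Fabric, R3, Enterprise Ethereum
    return "BDBM-T4"      # e.g., central bank digital currencies

# Illustrative readings of the examples discussed in the text:
examples = {
    "Bitcoin":            (True, True),
    "Coinbase":           (False, True),
    "Hyperledger Fabric": (True, False),
    "CBDC pilot":         (False, False),
}
for name, dims in examples.items():
    print(name, "->", bdbm_type(*dims))
```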

#### **Implications for BDBM Types' Mainstream Adoption from a Customer's Perspective**

As is evident from the analysis of the BDBM typology, the goals and extent of decentralization vary considerably across the four types. Thus, decentralization as a hallmark of BDBM does not adequately capture the variety of meanings that the term has across different BDBM implementations. Moreover, the decentralization discourse has mainly been led from a developers' and content creators' point of view (i.e., for whom decentralization, in terms of independence from incumbent digital platforms and powerful institutions, is advantageous in many respects) rather than from the regular customers' point of view (i.e., for whom this sort of decentralization creates a clear trade-off between self-sovereignty and additional cognitive efforts in terms of attitudes, learning, and accountability), which may at least partly explain the mostly positive view of decentralization in the extant blockchain technology discourse.

This terminological ambiguity has consequences for BDBM mainstream adoption because the decentralization it obscures entails clear trade-offs (i.e., self-sovereignty vs. duties and responsibilities). Plainly, BDBM-T1 feature the highest barriers to mainstream adoption, followed by BDBM-T2 and BDBM-T3, and then BDBM-T4. However, whereas BDBM-T2 may be seen as a (temporary) gateway toward BDBM-T1, BDBM-T3 and BDBM-T4 are clearly not decentralized in the initial sense and goals of blockchain technology (i.e., Bitcoin). Moreover, several BDBM-T1 also go beyond the initial level of decentralization that Bitcoin represents, for example, by building democratically governed and participant-owned cooperatives based on tokens (e.g., DAOs) or by abolishing state governance altogether (Atzori, 2015).

The following paragraphs therefore analyze the prospects of mainstream adoption for the four BDBM types. The analysis concentrates on the regular customer's perspective, putting less emphasis on other important challenges, such as scalability, security, privacy, and regulatory issues, that previous researchers have extensively covered. To address the customer's perspective, the present analysis focuses on necessary paradigm shifts and efforts in the cognitive domain, such as attitudes, learning and competence, and responsibility and accountability. Table 11.1 summarizes the extent to which attitudinal and behavioral paradigm shifts are necessary for each of the four BDBM types.

**Table 11.1** Overview of the extent of shifts needed for mainstream adoption of the four BDBM types

*Note.* Source: Design by author

*BDBM-T1* As outlined above, the challenges and barriers for this type are the highest across the four types because its mainstream adoption requires fundamental paradigm shifts in customer behavior. Whereas higher levels of decentralization imply diverse changes for software developers and content creators, from a customer perspective they imply a profound paradigm shift across multiple dimensions. The following paragraphs discuss the major factors of BDBM-T1 mainstream adoption from a customer-centric perspective.

*Attitude Shift* Whereas high levels of decentralization may be desirable from software developers' and content creators' point of view—mainly because, unlike when playing by the rules of centralized platforms, they can maintain long-term full control over their product or service (Dixon, 2018)—customers may not find such decentralization equally appealing. The authors of extant research have often mentioned BDBM-T1's high levels of technological complexity and low levels of usability (e.g., running a Bitcoin node or trading cryptocurrencies using decentralized exchanges, such as Uniswap), which is certainly an important barrier to mainstream adoption (Chen & Bellavitis, 2020; Tumasjan & Beutel, 2018). However, in addition to high levels of usability, using decentralized applications must come with a clear customer value-add. From the software developers' and other creators' point of view, this value-add may be independence from a centralized platform provider that, over time, could change the rules of cooperation, censor certain applications, and extract higher rents from developers and creators (Dixon, 2018), who, however, feel impelled to stay on the platform due to sunk cost and lock-in effects. From the customers' perspective, the overall user value has to be higher—and not just different—than what centralized providers offer with high levels of customer service. Using completely decentralized peer-to-peer services is, for the vast majority of customers, not an end in itself. For instance, using decentralized insurance products (e.g., Etherisc) effectively requires customers to gain an in-depth understanding of their economic and technological mechanisms. Thus, customers would need to shift their mindsets toward highly valuing autonomy, privacy, full control over their own data, freedom from large institutions, and similar factors as intrinsic benefits. Given similar levels of usability and cost, customers would therefore have to value self-sovereignty and decision freedom as ends in themselves to prefer BDBM-T1 over traditional centralized solutions with high levels of support and customer service. This increased intrinsic value of self-sovereignty and decision freedom often goes hand in hand with decreased levels of trust in traditional centralized institutions (e.g., libertarian or similar political views; Lichti & Tumasjan, 2023).

*Learning and Competence Shift* Increased decentralization in BDBM-T1 requires customers to build a range of competences, be it in the field of IT and/or the respective product/service domain (e.g., finance). For instance, whereas in the traditional financial system bank counselors advise customers on how to invest their financial assets, make transactions, and close a financing deal, a BDBM-T1 (e.g., DeFi applications for depositing or lending cryptocurrencies, such as Aave) requires customers to complete these tasks entirely by themselves. Thus, customers not only need to invest additional time and be interested in building the requisite expertise, but must also have the respective education and ability to do so. Of course, financial and other counselors could also emerge for BDBM-T1, but their involvement may lower the levels of decentralization due to the required trust in, and reliance on, their advice for customers' decision-making.

*Responsibility and Accountability Shift* Customers have to take on accountability and responsibility if transactions go wrong. Transaction problems can range from technical difficulties and honest human errors to outright fraud. Without central entities providing safety and legal support in this regard, customers need to be willing to take on these risks on their own. Although special insurance for blockchain-based products/services (e.g., crypto wallet insurance) may mitigate these risks, it creates additional cost and time investment. If Bitcoin is sent to a wrong address, for example, the transaction cannot be undone.

*BDBM-T2* The challenges for the mainstream adoption of BDBM-T2 are less pronounced than those for BDBM-T1, as BDBM-T2 are tailored toward customers who want to engage with new types of digital assets (e.g., cryptocurrency or non-fungible token [NFT] trading and investing) but want to do so through a trusted centralized infrastructure. Prominent examples are Coinbase (trading and managing crypto assets) and Opensea (trading and managing NFTs). Although BDBM-T2 allow users to engage in nontraditional assets independent of extant centralized institutions (e.g., fiat currency products), they do so in a rather traditional way that comprises high usability, security, and accountability. For instance, Coinbase acts as a centralized wallet provider storing customers' cryptocurrencies. Thus, the entry barrier, overall, is lower than for BDBM-T1.

*Attitude Shift* To engage in BDBM-T2, customers will need to see value in owning and transacting new digital assets (e.g., cryptocurrencies), thereby acting outside the traditional financial system and its products and services. Thus, similar to BDBM-T1, BDBM-T2 need to offer a clear value-add over and above traditional financial services and products. For instance, in a low interest rate phase, new digital assets could be seen as providing a potentially more profitable alternative. Moreover, in countries with unstable financial systems and/or for individuals with limited access to traditional banking services ("unbanked individuals"), using BDBM-T2 offers a clear value proposition. In contexts with stable and accessible banking systems, BDBM-T2 will likely pass through a typical diffusion of innovations cycle (Rogers, 1962). Finally, speculation and trading are, at least today, central affordances of BDBM-T2 that need to be valued as desirable goals in themselves. For instance, Coinbase offers customers the exchange and custody of cryptocurrencies in a centralized manner; that is, although customers invest in cryptocurrencies (e.g., Bitcoin), the usability and services are similar to those of established centralized institutions, such as banks or centralized digital platforms (e.g., Facebook).

*Learning and Competence Shift* Most BDBM-T2 are designed to facilitate the onboarding and support of new customers (e.g., centralized exchanges), very similarly to incumbent digital platforms (e.g., GAFA). Thus, from a usability point of view, users face almost no challenges beyond those inherent to all traditional digital business models (e.g., Coinbase and Binance). However, they have to gain knowledge about the digital assets themselves (e.g., cryptocurrencies and cryptocurrency-based index funds), or at least find advice enabling an informed investment decision. As the characteristics of many of these assets are complicated to study and understand, there is a comparatively large knowledge and competence gap that needs to be bridged. Moreover, in the case of personal cryptocurrency key storage (e.g., self-custody in cold wallets), users need to gain additional competences for dealing with new devices and software, which, however, are in most cases optimized for customer onboarding.

*Responsibility and Accountability Shift* As, in BDBM-T2, companies with a centralized infrastructure facilitate customers' transactions (e.g., Coinbase and Binance), customers face almost no increased responsibility or accountability in comparison to traditional digital business models (e.g., in the case of centralized exchanges). However, for self-custody key storage (e.g., cold wallets), users need to take on responsibility for safeguarding their own assets and can make no replacement claims in cases of loss.

*BDBM-T3* The challenges for the mainstream adoption of BDBM-T3 mostly concern changes for incumbent companies reengineering their extant IT infrastructure and processes toward DLT-based solutions (e.g., Hyperledger Fabric). Thus, the initial changes fall on the incumbent companies rather than on end customers. For instance, to enable blockchain-based supply chain tracking, peer-to-peer electricity trading, and shared digital health records, incumbent companies need to change their legacy IT systems and business models. If incumbent companies continue to offer their products and services in a traditional way but, owing to the blockchain-based infrastructure, with improvements in efficiency, transparency, and further value from the customers' point of view, mainstream adoption requires few changes on the regular end customers' side.

*Attitude Shift* For incumbent companies, an attitude shift toward more coordination and cooperation may be required when working in blockchain-based firm consortia, such as Corda (e.g., to enable more efficient data sharing). Moreover, when establishing public-permissioned blockchain solutions, companies need to substantially alter their attitude toward more transparency and openness to the public. On the end customer side, customers will often need to change neither their attitudes nor their behaviors, as the changes mostly concern the IT back end, but they will, of course, profit from potentially increased efficiency. Small to medium attitude shifts will be required if customers are also affected at the front end (e.g., managing digital identity in healthcare data management), as incumbent companies will focus on delivering end customer-friendly products and services.

*Learning and Competence Shift* Incumbent companies embracing BDBM-T3 (e.g., Hyperledger Fabric) will need to significantly invest in learning and developing new competences to build and maintain DLT or similar blockchain-inspired infrastructures. Changing from legacy IT systems and processes as well as established traditional digital business models will thus require substantial investments in learning and competence development. From an end customers' perspective, similarly to the attitude dimension, the required shifts will be small to medium.

*Responsibility and Accountability Shift* Generally, the shifts required on this dimension will not be high for enterprise DLT consortia (e.g., Corda) because they can rely on traditional contractual agreements. For end customers, the shifts depend on the extent of personal involvement, which companies could adjust according to customer preferences (e.g., the degree of responsibility for one's own identity management).

*BDBM-T4* Because the degree of decentralization in these projects is low or nonexistent, the mainstream adoption of BDBM-T4 requires comparatively lower levels of cognitive effort in terms of attitude, learning, and responsibility shifts. For instance, for the mainstream adoption of CBDC (for an overview of current projects and their status, see the CBDC Tracker [https://cbdctracker.org/]), these shifts largely concern the technical infrastructure and legal issues of central and commercial banks, whereas end customers have to undergo far fewer attitudinal and behavioral changes. However, critics have amply chided CBDC for its potential lack of privacy and high levels of state control; for customers, CBDC will thus most likely bring much lower levels of privacy and higher levels of state control.

#### **Discussion**

Based on a differentiated view of the term decentralization in the context of blockchain technology, this article set out to analyze to what extent the type and degree of decentralization impact the mainstream adoption of BDBM. The term has been and continues to be shrouded in ambiguity because different actors in the blockchain technology discourse, in both practice and research, have used it as a catchword for tremendously different goals. The frequent lack of a clear definition of the type and extent of decentralization for BDBM, in turn, has hampered the discourse on the chances and challenges of BDBM mainstream adoption in research and practice.

The present analysis yielded four types of BDBM emerging from the differentiation of two types of decentralization goals in blockchain-based business models, namely infrastructural distributedness (i.e., the technical dimension of decentralization) and institutional disintermediation (i.e., the governance dimension of decentralization). As has become evident, BDBM-T1—aiming at high decentralization on both dimensions—face the highest barriers to mainstream adoption. Not only are there several known technical and regulatory issues but, in addition, they require fundamental paradigmatic shifts in customer attitudes and behaviors. Overall, this paradigm shift can be described as moving toward highly elevated levels of self-sovereignty, which necessarily goes hand in hand with the increased levels of learning and competence as well as responsibility and accountability demanded of individuals.

From a customer's perspective, there is a clear trade-off between the aspiration to high levels of self-sovereignty implied by high levels of decentralization (i.e., BDBM-T1) and BDBM's ease of use. For instance, the freedom of being in charge of one's own digital assets (e.g., cryptocurrencies) comes with the burden of learning about the digital assets themselves (e.g., their risks); acquiring and continuously updating IT competences to be able to interact with the respective interfaces (e.g., decentralized exchanges and wallets); and taking full accountability in cases of failures and errors (e.g., technical failures and loss of private keys). Similarly, purchasing goods via decentralized marketplaces (e.g., OpenBazaar) gives buyers and sellers freedom from large corporations by cutting out the middleman from transactions. However, doing so comes with the burden of increased effort to make a transaction. For instance, OpenBazaar users need to run the OpenBazaar server and client to participate in the network. Users have no direct support when technical and/or legal problems arise. Instead, they need to tackle issues themselves by consulting either the website's documentation or the code base, which is accessible as part of an open source project. Moreover, if they want to protect larger payments, they need to identify a moderator and use escrow for transactions. If there are problems after the purchase, buyers must appeal directly to the seller: No central entity performs the role of intermediary. Of course, some of these challenges can be partially addressed (e.g., negative reviews warning prospective customers away from unreliable sellers) but, overall, the trade-off between decentralization and transaction ease will persist, because high levels of decentralization necessarily imply high levels of self-sovereignty which, in turn, imply higher levels of competence and responsibility.

Although high levels of customer self-sovereignty may be desirable as an end in themselves (i.e., an intrinsic end), they represent a fundamental paradigm shift away from current digital business models (e.g., the GAFA, traditional Fintech, and other digital products and services), which aim to provide the highest levels of customer service without requiring customers to worry about transactional issues, as described in the case of OpenBazaar. To achieve customer mainstream adoption of BDBM-T1, customers will need to significantly change their attitudes toward intrinsically valuing self-sovereignty and adopt the attached behaviors, namely increasing competences and taking full accountability for their actions. However, the blockchain technology discourse around decentralization has often been led from a (software) developer's and content creator's point of view, which at least partly explains its participants' enthusiastic view of decentralization. As outlined above, developers and content creators view several aspects of decentralization as highly desirable (e.g., independence from large corporations and platforms, such as the GAFA) and at the same time fully feasible (e.g., following, auditing, and producing the necessary code for smart contract applications). Yet to adopt this perspective is to disregard the needs and competences of regular customers with little or no coding background who simply want to get a job done. Although sufficient coding competences may become part of standard education in the future (e.g., similar to using PCs or driving cars today), current (at least European) educational systems make it likely that this prospect remains many years, if not decades, away. Moreover, true decentralization will always and necessarily require individuals to adopt high levels of responsibility and accountability, which may not be feasible for people in every life domain.

These issues, however, are much less pronounced for today's BDBM-T2 and BDBM-T3 (and hardly exist for BDBM-T4). Thus, the mainstream adoption of BDBM-T2 and BDBM-T3 is likely to be a matter of time and of solving technical and regulatory issues. Both BDBM-T2 and BDBM-T3 focus less on complete customer self-sovereignty than BDBM-T1 do, and more on decentralizing or disintermediating certain incumbent structures to attain specific goals, such as increased efficiency, transparency, security, and collaboration. However, both BDBM-T2 and BDBM-T3 still comprise certain central elements by design, which relieve users of complete self-sovereignty by taking over certain jobs for them (e.g., custodial services or technical support). Importantly, in terms of business model design, BDBM-T2 and BDBM-T3 also allow businesses more possibilities for value capture in the traditional sense than BDBM-T1 do. As BDBM-T1 aim at the highest levels of user self-sovereignty, businesses by definition have limited possibilities for such value capture, as the aim is to enable users to complete most jobs (including support and service) themselves.

#### **Conclusion**

This article set out to analyze the notion of decentralization in BDBM and explore its implications for BDBM's mainstream adoption from a regular customer's perspective. Because those engaging in the blockchain discourse, academic and non-academic alike, have previously used the term decentralization ambiguously and with widely diverging intentions, this analysis has aimed to clarify these different meanings and consolidate them in a basic typological framework. To this end, the present analysis introduced a two-dimensional typology to categorize BDBM in terms of their decentralization focus. As has become evident, BDBM developers have been pursuing decentralization for extremely different goals, and to radically varying extents, ranging from efficient and transparent data sharing (e.g., enterprise DLT solutions) to individual financial independence disintermediating state institutions (e.g., using cryptocurrencies in self-custody), and to completely self-sovereign, peer-to-peer organized business models (e.g., DAOs).

Overall, with regard to the relationship between knowledge and technology, this analysis shows that more technology, and especially more decentralization of technology, may require elevated levels of knowledge, competence, and accountability amongst customers, while concurrently reducing specialization and the division of labor. Reaching such levels of knowledge, competence, and the intrinsic desire for individual accountability and self-sovereignty on the regular customers' side would require enhanced technological, business, economics, and legal education. Although customers' technological proficiencies have generally increased over time (e.g., working with a personal computer is now considered a standard skill), products and services have also become easier to use (e.g., software ease of use has been steadily increasing). For instance, digital services such as Google Maps have made traditional navigation knowledge (using a map and compass) obsolete while being easy to use for regular customers. However, Google Maps can offer enhanced usability because it is a centralized and "closed" service (i.e., it is not open source and cannot be altered or adapted), optimized around ease of use. Customers using decentralized applications based on blockchain technology, on the other hand, require additional knowledge to be able to use the services properly and must take responsibility for their own usage behavior. Thus, as this analysis has shown, a necessary trade-off exists between ease of use and self-sovereignty. The relationship between knowledge and technology accordingly appears ambivalent, as technology may both increase and decrease the levels of knowledge required, depending on whether products and services are aimed at building centralized business models with low levels of required customer self-sovereignty or decentralized business models with high levels of required customer self-sovereignty. Future research is needed to further investigate the prerequisites and circumstances under which the mainstream adoption of BDBM in the sense of increased customer self-sovereignty is possible and desirable.

**Acknowledgements** This article is based on the author's presentation on May 4th, 2022 at the 19th Interdisciplinary Symposium on Knowledge and Space at Studio Villa Bosch, Heidelberg, Germany.

#### **References**


**Andranik Tumasjan** is a professor and head of the management and digital transformation research group at the Johannes Gutenberg University Mainz (Germany) and an associate scholar at the Centre for Blockchain Technologies at University College London (UCL CBT). His research focuses on how new digital technologies and trends influence management and the emergence of new organizational forms, business models, and entrepreneurial opportunities. He studied at the Ludwig Maximilians University of Munich and Nanjing University (China) and received his doctoral degree and a postdoctoral degree in management from the Technical University of Munich. His work has received several national and international awards, including Best Paper Awards from the Academy of Management (AOM), the Hawaii International Conference on System Sciences (HICSS), and the German Academic Association for Business Research (VHB).

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 12 Big Data without Big Brothers: The Potential of Gentle Rule Enforcement**

**Ido Erev, Manal Hreib, and Kinneret Teodorescu**

One of the main goals of laws and regulations is to decrease the frequency of behaviors expected to impair social safety and welfare. These behaviors are defined as violations and, if detected, should be punished. Historically, the main challenge in the design of effective laws and regulations was the difficulty of detecting violations; a low probability of detecting violations undermines the potential benefit to the public good offered by regulatory acts. A common solution to this difficulty involves the use of severe punishments to create deterrence. For example, despite the low probability of actually catching a thief, past enforcers perceived the threat of chopping off the thief's hands, or of transportation to Australia, as sufficient to reduce thefts.

Becker (1968/2000) shows that, under the standard interpretation of rational economic theory, using severe punishments to compensate for insufficient detection should prove highly effective. However, behavioral research has documented deviations from the rational model that challenge the effectiveness of this compensatory approach. One solution to this problem involves the use of advanced big data and surveillance technologies to increase the probability of detection. However, the use of these technologies is often associated with indirect costs in the form of invaded privacy. Unwise use of big data for enforcement can give the enforcers too much power and impinge on basic rights.

In the current chapter, we review recent research that sheds light on the costs and benefits associated with the use of big data technologies to enforce laws and rules. In section "The Impact of Rare Events", we summarize basic research on human sensitivity to low-probability (rare) events. We conclude that before gaining experience people are more sensitive to the magnitude of the punishment, but that experience reverses this tendency. The effectiveness of the deterrence generated by a threat of severe punishments, therefore, should be short lived. Experienced agents cannot be so easily threatened and are likely to be more sensitive to the probability of detection than to the magnitude of the punishment.

In section "The value of gentle rule enforcement", we highlight the value of gentle rule enforcement. We suggest that severe punishment can be costly for the enforcers themselves, interfering with proper enforcement. Consequently, if the probability of detection can be raised suffciently, gentle enforcement is more effective than severe punishment. In section "Privacy", we demonstrate that in many settings gentle rule enforcement can be performed with minimal invasion of privacy and does not require changes of current laws. When the probability of the detection of the initial violation is suffciently high, gentle enforcement can be performed without collecting data about the behavior of specifc individuals. In many cases, the focus on the location in space can replace the need to impair privacy. In section "Gentle rule enforcement and the law", we consider the legal implications of our analysis.

#### **The Impact of Rare Events**

Experimental studies of human decision-making have revealed contradictory deviations from the predictions of rational economic theory. Kahneman and Tversky (1979) noted that part of these contradictions involves the inconsistent impact of low-probability (rare) events. They wrote: "Because people are limited in their ability to comprehend and evaluate extreme probabilities, highly unlikely events are either neglected or overweighted, and the difference between high probability and certainty is either neglected or exaggerated" (Kahneman & Tversky, 1979, p. 283).

#### *The Description-Experience Gap*

The effort to clarify the impact of rare events has revealed a large difference between initial decisions made purely on the basis of a description of the incentive structure and subsequent decisions made largely on the basis of past experiences. The top panel of Table 12.1 summarizes Kahneman and Tversky's (1979) study of the impact of rare events on decisions from description. The results reveal high sensitivity to rare (low-probability) outcomes. For example, most participants preferred a "sure loss of 5" over a "1 in 1000 chance to lose 5000." This pattern appears to suggest that if our goal is to reduce the frequency of a specific illegal behavior, rare but severe fines (e.g., a fine of 5000 for 1 in 1000 violations) are likely to be more effective than frequent but low fines with the same expected penalty (e.g., a fine of 5 with certainty).

**Table 12.1** Comparison of studies of decisions from description with and without feedback

*Note.* Source: Design by authors

However, other studies (Barron & Erev, 2003; Hertwig, Barron, Weber, & Erev, 2004; Plonsky & Teodorescu, 2020a) have subsequently revealed that experience can reverse the impact of rare outcomes. The bottom panel of Table 12.1 presents one demonstration of this observation: When people face repeated choices between a "sure loss of 1" and a "1 in 20 chance to lose 20," they initially tend to prefer the sure loss; after fewer than 5 trials with feedback, however, they change their preference to favor the riskier prospect. Accordingly, the tendency to overweight rare events when considering the initial description is reversed when basing decisions on repeated experiences, leading to underweighting of rare events in the long run. This pattern is known as the "description-experience gap" (Hertwig & Erev, 2009).

#### *The Reliance on Small Samples Hypothesis and the Intuitive Classifier Explanation*

Hertwig et al. (2004) noted that the tendency to underweight rare events in decisions from experience can be captured by assuming that decision-makers rely on only small samples of their past experiences. To see why reliance on small samples leads to underweighting of rare events, note that the probability that a small sample will not include events that occur with probability *p* < 0.5 tends to be larger than 0.5. Specifically, most samples of size *k* **will not** include a rare event (one that occurs with probability *p*) when the following inequality holds: *P*(no rare event included) = (1 − *p*)^*k* > 0.5. This inequality implies that *k* < log(0.5)/log(1 − *p*). For example, when *p* = 0.05, *k* < 13.51. That is, when *k* is 13 or smaller, most samples do not include the rare event (Teodorescu, Amir, & Erev, 2013). Therefore, if people draw small samples from the true payoff distributions and choose the option with the higher sample mean, in most cases they will choose as if they ignore the possibility that the rare event can actually occur.
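The arithmetic can be checked directly. The following minimal Python sketch (ours, not from the original studies) computes the probability that a sample of size *k* misses an event of probability *p* and reproduces the threshold stated above:

```python
import math

def miss_probability(p: float, k: int) -> float:
    """Probability that k independent draws contain no occurrence
    of an event with per-draw probability p."""
    return (1.0 - p) ** k

# Most samples miss the rare event whenever k < log(0.5) / log(1 - p).
p = 0.05
k_max = math.log(0.5) / math.log(1.0 - p)
print(f"k < {k_max:.2f}")                                  # k < 13.51
print(f"P(miss | k=13) = {miss_probability(p, 13):.3f}")   # 0.513 > 0.5
print(f"P(miss | k=14) = {miss_probability(p, 14):.3f}")   # 0.488 < 0.5
```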

The hypothesis that people rely on small samples underlies the most successful models in a series of choice prediction competitions (Erev, Ert, & Roth, 2010a, b, 2017; Plonsky et al., 2019) and can be used to explain many judgment and decision-making phenomena (e.g., Erev & Roth, 2014; Erev, Ert, Plonsky, & Roth, 2023; Fiedler, 2000; Fiedler & Juslin, 2006; Kareev, 2000). Plonsky et al. (2015) demonstrate that the descriptive value of this hypothesis may be the product of the fact that reliance on small samples is expected both when decision-makers try to minimize effort and when they are highly motivated and use sophisticated computations in an attempt to approximate the optimal strategy.

The drive to minimize effort is likely to trigger reliance on small samples when the sampling process is costly and the benefit of relying on large samples is relatively low. This effect is particularly clear in studies that focus on search behavior (Hertwig et al., 2004; Wulff, Mergenthaler-Canseco, & Hertwig, 2018; Ackerman, Douven, Elqayam, & Teodorescu, 2020; Teodorescu, Sang, & Todd, 2018).

When people are motivated to maximize expected return, they are also likely to base each choice on small samples if they have reason to believe that the environment is dynamic (e.g., the probability of gain is determined by a Markov chain). In such cases, one can approximate the optimal strategy by relying on a small sample of the most similar past experiences. The thought experiment presented in Figure 12.1 illustrates this assertion.

It is easy to see that in Figure 12.1's example, the intuition (of intelligent decision-makers) is to base the decision in Trial 16 on only three of the 15 past experiences—those that seem most similar to Trial 16. In this example, similarity is determined by the payoff from Top in the preceding three trials: Trials 4, 8, 12, and 16 are similar because in all of them the payoff in the preceding three trials was "−1, −1, −1." Examining Plonsky et al.'s (2015) results, one can conclude that the underlying cognitive processes resemble machine-learning classification algorithms (like Random Forest; Breiman, 2001) that classify data based on distinct features. In Figure 12.1's thought experiment, intuition uses the feature "the payoff from Top in the last three trials" as a signal to guide the choice in Trial 16.

(a) Task:

In each trial of the current study, you are asked to choose between "Top" and "Bottom", and earn the payoff that appears on the selected key after your choice. The following table summarizes the results of the first 15 trials. What would you select in trial 16?


(b) Implications:

In trial 16, intuition favors "Top" despite the fact that the average payoff from "Top" over all 15 trials is negative (-0.4). This intuition suggests a tendency to respond to a pattern, and implies that only 3 of the 15 trials (Trials 4, 8 and 12) are used to compute the value from "Top" in trial 16.

**Fig. 12.1** A thought experiment. Following Plonsky et al., 2015. Source: Design by authors

Under this "intuitive classifer" (Erev & Marx, 2023) explanation, people are likely to consider wide classes of features as signals and use the feature that provides the best classifcation of the relevant past experiences. One obvious example involves the use of "traffc light color" as a signal to guide driving behaviors. Most drivers use this signal and stop at red lights to avoid accidents and fnes. However, when explicit signals such as a red traffc light are absent, people, in an effort to understand their environment, may rely on many other (sometimes irrelevant) signals to sample subsets of past experiences (e.g., Cohen & Teodorescu, 2022; Plonsky & Teodorescu 2020b). Such signals could be their current mood, or the day of the week. Accordingly, by using the intuitive classifer hypothesis one would predict that even highly motivated people are likely to base their decisions on only a small subset of their previous experiences.

#### **The Value of Gentle Rule Enforcement**

The reliance on small samples hypothesis suggests that (1) the deterrence created by a rare but severe punishment will not be effective for most of the population that has already gained some experience in comparable situations; and (2) when violations of laws and regulations can be detected easily and frequently, even gentle fines are enough to ensure compliance. For example, if running a red light saves 80 seconds, a frequent fine of 81 seconds should be enough to eliminate this violation, whereas a severe but rare 24-hour detention will have little effect in the long run.

In a recent paper, Teodorescu, Plonsky, Ayal, and Barkan (2021) explicitly examined the above predictions in the simple perceptual task described in Figure 12.2. In each study trial, they presented their participants with dots on a divided screen and asked them to report which side contained more dots. Those who reported more dots on one particular side received a higher reward (10 points vs. 1 point), regardless of the accuracy of their response. Thus, participants could try to increase their earnings by reporting the more profitable side (the one worth 10 points), even if doing so contradicted the evidence. In the first stage, the researchers did not verify the answers, and reporting the more rewarding 10-point side was always beneficial. In the second stage, they informed the participants that, from then on, they would randomly sample and verify answers, meting out fines for each incorrect response. As a deterrent, they implemented a policy of high enforcement frequency (p = 0.9) with small fines (−10) for one group, and a policy of low enforcement frequency (p = 0.1) with high fines (−90) for the other. Notice that the expected value of misreporting was identical under both enforcement policies.

**Fig. 12.2** Timeline example of two trials: The first trial without inspection and the second with inspection under an enforcement policy with severe punishment (fine = −90 points). Source: Reprinted from Frequency of enforcement is more important than the severity of punishment in reducing violation behaviors, by Teodorescu et al. (2021, p. 3). Copyright by authors. Reprinted with permission

The results revealed that a higher frequency of gentle punishments decreased the rate of violation much more effectively than a lower frequency of more severe punishments. The gap was especially large among particularly delinquent participants (those who tended to commit more violations in the first, non-enforced stage). Moreover, this trend held steady even when the researchers told the participants in advance how large the fine was but did not reveal the frequency of enforcement—which simulates many real-life situations. From a practical standpoint, one can conclude that when the inspection rate is low, policymakers should prioritize increasing the frequency of inspections over the severity of punishments.
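The logic of this result can be illustrated with a toy simulation of an agent who relies on small samples. The sketch below is our own simplification, not the design or model of Teodorescu et al. (2021); the payoffs, sample size, and decision rule are illustrative assumptions. Both policies have the same expected fine per violation (0.9 × 10 = 0.1 × 90 = 9 points), yet the frequent-and-gentle policy suppresses violations more strongly:

```python
import random

def violation_rate(p_detect: float, fine: float, honest_payoff: float = 1.0,
                   violation_payoff: float = 10.0, sample_size: int = 3,
                   trials: int = 1000, seed: int = 0) -> float:
    """Simulate an agent who, on each trial, estimates the value of violating
    from a small random sample of past violation outcomes, and violates
    whenever that estimate beats the honest payoff."""
    rng = random.Random(seed)
    outcomes = [violation_payoff]  # one initial, undetected experience
    violations = 0
    for _ in range(trials):
        sample = [rng.choice(outcomes) for _ in range(sample_size)]
        if sum(sample) / sample_size > honest_payoff:
            violations += 1
            detected = rng.random() < p_detect
            outcomes.append(violation_payoff - fine if detected
                            else violation_payoff)
    return violations / trials

print(violation_rate(p_detect=0.9, fine=10.0))  # gentle & frequent: low rate
print(violation_rate(p_detect=0.1, fine=90.0))  # severe & rare: higher rate
```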

Moreover, enforcers are often reluctant to impose very large fines: When the expected punishment is severe, law enforcement agents tend to let people go with just a warning. Large fines could therefore create a perception of unfairness and consequently reduce the probability of detection (Feess, Schildberg-Hörisch, Schramm, & Wohlschlegel, 2018; Polinsky & Shavell, 2000), which seems to be the key factor in reducing delinquent behavior. Accordingly, these findings strongly indicate that "gentle rule enforcement" (Erev, Ingram, Raz, & Shany, 2010c), which pairs smaller punishments with a higher probability of detection, would be more effective in reducing violation rates, especially among high offenders, the target population of any enforcement policy.

In order to clarify the significance of this suggestion, it is instructive to note that many substantial violations begin with much lighter breaches. For example, certain cheating efforts during exams start with looking around to identify a visible exam form with completed answers. Similarly, certain violent fights in public areas start with carrying a concealed weapon and threatening others with it. The current logic suggests that enforcers can use gentle rule enforcement to stop the first stages in these event sequences, which, left untouched, might snowball into a serious violation. In contrast, it is often impossible (or too costly) to stop the first stages with harsh punishments.

In one examination of the value of gentle rule enforcement, Erev et al. (2010c) tried to reduce cheating on college exams. They ran an experiment during the final semester exams of undergraduate courses at the Technion. Traditionally, instructions for exam proctors at the Technion included collecting the students' IDs and preparing a map of their seating locations.

As collecting IDs is the first step in constructing this map, proctors commonly interpreted these instructions to mean that they should prepare the map at the start of the exam. Early map preparation was designed to ensure that it would be possible to detect and severely punish cheaters. However, it distracts the proctors and reduces the probability of early gentle punishment (e.g., warning or moving the suspected student to the first row). The experiment compared two conditions that differed with respect to the timing of the map's preparation. In the control condition, the proctors were asked to prepare the map at the beginning of the exam (as they had traditionally done prior to the study); in the experimental condition, the proctors were asked to delay the preparation by 50 minutes, implicitly allowing them to focus on early detection of cheating intentions. Seven undergraduate courses were selected to participate in the study. In all courses, the final exam was conducted in two rooms, one randomly assigned to the experimental and the other to the control condition. After finishing the exam, students were asked to complete a brief questionnaire in which they rated the extent to which students had cheated in this exam relative to other exams. The results reveal a large and consistent difference: The perceived level of cheating was lower in the experimental condition in all seven comparisons.

Another examination of the value of gentle enforcement, conducted by Schurr, Rodensky, and Erev (2014), focused on an attempt to increase compliance with safety rules. Foremen in 11 Israeli factories were asked to encourage the use of safety devices by simply telling workers who did not use them to cease their current work and bring the missing safety devices. This gentle but frequent enforcement mechanism replaced a harsh one in which large fines were occasionally administered by the factories' safety inspectors. The results revealed a quick decrease, from 50% to 10%, in safety rule violations.

To summarize, given people's tendency to rely on small samples of past experiences and the associated sensitivity to enforcement frequency, gentle yet frequent rule enforcement seems to be the key to effectively reducing undesired violation behaviors. Although the cost of close monitoring used to be high, recent technological advancements and the increasing use of AI algorithms enable more effective monitoring at significantly reduced costs (e.g., Abaya, Basa, Sy, Abad, & Dadios, 2014; Piza, Welsh, Farrington, & Thomas, 2019; Raaijmakers, 2019).

#### **Privacy**

One of the main risks associated with the use of big data technology for enforcement is a costly invasion of privacy (e.g., Lynch, 2020; Schwartz & Solove, 2011; van Zoonen, 2016). We believe that a gentle rule enforcement policy as discussed above can reduce this risk. Our belief rests on the observation, previously alluded to, that many severe violations start with minor ones. Although identification of people committing severe violations can be important, for minor violations we might prefer to prioritize stopping them early on, without the need to identify the offender. As it is not vital to identify the offender of most small violations, it is possible to develop sensors that use big data technology to stop a violation from escalating without recording personally identifiable information (PII). One example of successful enforcement of this type involves the use of seat-belt alarm systems (Lie, Krafft, Kullgren, & Tingvall, 2008). These systems create an environment in which violations of the law "buckle your seat belt" lead to an unpleasant noise with high probability. The systems capitalize on our sensitivity to the frequent event and are thus highly effective despite the fact that they neither collect information about the individuals violating the law nor inflict severe punishments.

Another example involves the use of gentle rule enforcement to reduce cheating in exams, described above. This enforcement was performed without collecting information on the individuals who were asked to move to the first row. The move to the first row was effective because it was enforced liberally yet constituted only a minor punishment (for example, it wasted time), and also because it served as a frequent, implicit warning.

These examples demonstrate that when the detection probability of the first stage of a sequence of violations is sufficiently high, certain warnings can replace both punishment and the invasion of privacy. To clarify the potential of this observation, consider the use of video surveillance systems to reduce violence in public areas. Previous research (see Welsh & Farrington, 2009) shows that surveillance systems are rather effective in reducing car-related crimes, but much less effective in reducing physically harmful forms of violence (e.g., homicides, fights with injuries, aggravated assaults) in public areas. Under the reliance on small samples hypothesis, this gap in the effectiveness of surveillance cameras reflects differences in the probability of detection (Hreib, 2017). When a car is stolen or damaged, the owner is likely to file a complaint, and the data collected by the surveillance systems significantly increase the probability of identifying and punishing the offender. In contrast, violence is currently likely to be detected only in the case of serious injuries or homicides. Take, for example, cases in which youngsters use a concealed weapon to threaten others. It is natural to assume that this behavior will usually prove effective: The threatened party is likely to understand the message and back down. In such cases, the existence of the surveillance camera is ineffective because the violation will not be detected.

To illustrate this problem, consider a city with 200 public areas that are covered by video surveillance systems. Assume further that all 200 cameras are connected to an operations room, and two operators monitor the 200 screens with the intention of intervening (sending police) when they detect the beginning of a fight. It is natural to assume (and the reliance on small samples hypothesis would lead one to predict) that the operators are likely to focus on the most interesting screen: the one attached to their smartphone. Thus, the probability of detecting violence in real time is very low. Big data technology can solve this problem. For example, developers can create machine-learning algorithms that detect evidence of threats involving concealed weapons and other indications of the beginning of a fight, and immediately send a warning signal. The signal, say a blue light, can appear both on the screen (in the operations room) and on the camera in the public area. The signal on the screen will draw the operator's attention, and the signal on the camera will inform the fighting parties that the police are on their way. Thus, like the seat-belt alarm, it reduces the benefit of violating the law and can stop the violation without collecting PII.

Similarly, undesired smoking in public areas can be detected via smoke sensors, but instead of identifying the individual offender, an automatic reaction can interrupt the smoker. For example, imagine that each time a sensor detects cigarette smoke in a pub, it automatically turns off all lights within a given radius of the detected smoke (or, alternatively, turns off the lights at all other tables, leaving light only on the smoking table). In a similar vein, sensors can detect pedestrians running a red light at crosswalks and respond with an aversive sound (which will also direct nearby people's attention to the violation). More advanced sensors can be used to detect violations such as littering. Imagine that each time something falls from someone's hands, a nearby speaker announces: "Something has fallen on the floor, please pick it back up."

More generally, we suggest that the solutions to many violations start with the use of local sensors to detect the existence of violations in public spaces. Once a sensor has detected a violation, it can send non-private information about its location while simultaneously creating an immediate automatic reaction that signals to the offenders that their violation has been noticed. Thus, by focusing on the space, it can limit the impairment of privacy and direct patrols to where they will be most effective. We suggest that this type of solution generates gentle enforcement, which we expect to reduce small violations (that can lead to serious violations) in public areas without invading privacy.
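A minimal sketch of this event flow might look as follows. All names here (the alert type, location identifiers, and handler functions) are hypothetical placeholders of our own; the point is that the alert record carries only what happened, where, and when, and no PII:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ViolationAlert:
    """Deliberately contains no personally identifiable information."""
    violation_type: str   # e.g., "smoke_detected" or "possible_fight"
    location_id: str      # e.g., "camera_117" or "pub_table_4"
    timestamp: str

def trigger_local_signal(location_id: str) -> None:
    # Placeholder for the immediate gentle reaction at the location:
    # a blue warning light, an aversive sound, lights switched off, etc.
    print(f"[{location_id}] warning signal activated")

def notify_operators(alert: ViolationAlert) -> None:
    # Placeholder: draw the operators' attention to this location only.
    print(f"operations room notified: {alert}")

def handle_detection(violation_type: str, location_id: str) -> ViolationAlert:
    """On detection: react gently on the spot, then report location and time."""
    trigger_local_signal(location_id)
    alert = ViolationAlert(violation_type, location_id,
                           datetime.now(timezone.utc).isoformat())
    notify_operators(alert)
    return alert

handle_detection("possible_fight", "camera_117")
```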

#### **Gentle Rule Enforcement and the Law**

The examples presented above demonstrate that the use of technology to facilitate gentle rule enforcement in public areas does not require new legislation. For example, adding blue warning lights to surveillance cameras changes neither the information these cameras collect nor the punishment meted out to individuals found to violate specific laws. It only directs the attention of the human operator observing multiple screens to a region of interest, consequently increasing the probability of detecting initial violations. At the same time, the blue light at the location itself warns individuals who have begun violating the law that the police are on their way, thus potentially interrupting or preventing more severe violations before they ever occur. We expect these changes to facilitate the enforcement of current laws and regulations in public areas and to increase compliance with the law. In addition to reducing severe crimes, we expect them to reduce the necessity of severe punishments.

Yet some violation behaviors occur in private areas (one's car or house), where privacy concerns bar policymakers from installing sensors linked to automatic responses. In these cases, regulations requiring the installation of such sensors in privately owned consumer products can help. The simplest example is the regulation requiring car manufacturers to install sensors that react with an annoying sound when passengers fail to fasten their seat belts (but without reporting this to any central agency). We expect that extending such regulation to additional sensors that detect and react to other dangerous driving behaviors (e.g., driving above the speed limit, changing lanes too frequently, dazzling other drivers with strong headlights, etc.) would drastically reduce these violation behaviors. Importantly, in the absence of such regulations, another solution is to incentivize individuals to voluntarily install gentle enforcement devices or apps, for example by offering discounts on insurance plans to consumers who use them.

#### **Summary**

Basic decision research suggests that, with experience, people become highly sensitive to the most frequent outcomes and tend to underweight rare outcomes. Rare but severe punishments therefore lose their deterrence in the long run, and gentle enforcement with high probability is likely to prove more effective in reducing violation behaviors. Big data technologies in surveillance systems and advanced sensors enable a substantial increase in the probability of detecting violations, yet they are criticized for invading privacy. The current analysis suggests that these problems can be addressed by building on the observation that most crimes start with small violation behaviors, which can be detected and stopped without collecting personally identifiable information (PII). Thus, it is possible to develop big data technologies that gently prevent crime and avoid the Big Brother problem.

**Acknowledgements** This research was supported by two grants from the Israel Science Foundation: Grants 535/17 and 861/22 to Ido Erev, and Grant 2740/20 to Kinneret Teodorescu.

#### **References**


**Ido Erev** (PhD in Psychology from UNC Chapel Hill, 1990) is the ATS' Women's Division Professor (2006) in the Faculty of Data and Decision Sciences at the Technion, and a past president of the European Association for Decision Making. His research clarifies the conditions under which wise incentive systems can solve behavioral and social problems.

**Manal Hreib** (PhD in Behavioral Sciences from the Technion, 2017) is the head of the Research, Assessment & Evaluation department at Co-Impact (The Partnership for a Breakthrough in Arab Employment in Israel) and head of the research department at the Negev Coexistence Forum for Civil Equality. Her work mainly focuses on the diversity, integration, and inclusion of minorities in Israeli society.

**Kinneret Teodorescu** is an associate professor at the Technion – Israel Institute of Technology. She received her PhD in Behavioral Sciences from the Technion in 2014 and returned to the Technion as faculty in 2016 after a post-doc at Indiana University. Kinneret studies the effect of incentives on long-term behaviors, focusing on experience-based decisions in varied contexts such as search and exploration, dishonesty, and healthy lifestyle. With the aim of improving social welfare, Kinneret strives to identify interventions/policies that can effectively mitigate behavioral problems.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **Chapter 13 Personal AI to Maximize the Value of Personal Data while Defending Human Rights and Democracy**

#### **Kôiti Hasida**

AI (artificial intelligence) is flourishing. A centralized AI (CAI) is an AI working on the basis of centralized management of personal data (PD): Its operator controls many individuals' PD, and the CAI exploits that PD to intervene in their behaviors. The attention economy (AE), on the other hand, is the social state in which economic activities are driven by the need to attract people's attention. CAIs and the AE jointly give rise to digital Leninism (Heilmann, 2016) and surveillance capitalism (Zuboff, 2019), diffusing misinformation and biases and distorting people's behavior. This damages not only democracy (freedom of thought, conscience, speech, and choice) but also other social goods (value creation by PD), as shown in Figure 13.1.

Digital Leninism is autocratic administration using digital technology, in which CAI is the major technology fitting autocracy and is utilized to implement the ideology of Lenin rather than that of Marx, Stalin, or Mao. China's social credit system is a typical example. Unlike commercial credit services, such as Ant Group's Sesame Credit, this national credit system is inescapable for the Chinese people. They are banned from long-distance travel if their credit scores are bad, they may be exposed on electronic billboards if they commit traffic violations, and so forth. The Chinese government has also employed CAIs for face recognition and the like to oppress ethnic minorities and democratization movements by picking out targeted people in Beijing, Shanghai, Xinjiang, Hong Kong, and so on.

Surveillance capitalism makes even more massive use of CAIs to monitor and manipulate unaware individuals' behaviors for the sake of commercial benefits. For instance, an algorithm developed by the American retail company Target to predict female customers' pregnancy and delivery dates from their purchase records successfully identified a pregnant high school girl and sent her coupons for baby clothes and cribs, all while she was unaware that she was being monitored (Duhigg, 2012). Another example: The British consulting firm Cambridge Analytica illegally collected 87 million people's data through Facebook's Friend API and allegedly used a CAI to manipulate swing voters' voting behaviors in the U.K. Brexit referendum and the U.S. presidential campaign, both in 2016, in support of Brexit and Donald Trump, respectively (Confessore, 2018).

**Fig. 13.1** The danger of centralized AI and the attention economy. Source: Design by author

Both digital Leninism and surveillance capitalism are accompanied by behavior distortions. Fake news, echo chambers, and filter bubbles have distorted beliefs and behaviors since antiquity, but information technologies, AI technologies in particular, have diversified and refined these distortions. For instance, deep fake technology may make it impossible for viewers to ascertain the authenticity of video footage.

CAIs and AE thus not only threaten freedom of thought, conscience, speech, and choice of action, but also impede value creation by PD. The threat to freedom entails a threat to democracy, as the former is the foundation of the latter. Value creation by PD is impeded because it is restricted by centralized PD management and biased by attention distortions.

Worse still, it is impossible for humanity to jointly confine CAIs and the AE, because they create winners: Centralized PD management and behavior distortions may eventually confer huge profits and power upon some companies and countries. In contrast, international collaboration to avoid nuclear war and global warming is logically possible, because these do not create winners.

The only way to reduce CAIs is to replace them with another technology that creates larger value. Both public and private service providers will voluntarily move from CAI to the alternative technology if doing so increases their benefit. Some autocratic governments may insist on CAIs, but such a new technology would prevent CAIs' further global spread.

Personal AI (PAI) can serve as such an alternative technology to displace CAI. Each individual will own his or her PAI, which is exclusively dedicated to him or her, manages all his or her PD, and makes full use of it to intervene in his or her behaviors more deeply and carefully than other technologies such as CAI, assisting his or her living and working activities and behavior changes: by far the best personal service ever. PAIs create much larger value for individual users, and thus entail much larger profits for businesses, than CAIs, because PAIs fully utilize PD. Economy of scale holds in this context, assuming a mediator that aggregates the knowledge necessary for personal services and provides it to many PAIs.

Due to the full utilization of the users' PD, however, PAIs could be much more dangerous than CAIs. Some strict governance of PAIs and the mediator is indispensable in order to establish their social receptivity so as to displace CAIs.

On the other hand, AE is inevitable, because humanity's bounded rationality renders attention a necessarily scarce resource. Each individual should be able to better manage the authenticity and diversity of information he or she accesses, however, by means of graph documents together with his or her PAI's assistance. As discussed later, graph documents are documents in the form of typed directed graphs with explicit semantic structures to facilitate composition, comprehension, and learning.

The remainder of this chapter shows that decentralized management of PD (DMPD) serves as the common foundation for PAIs and graph documents, which jointly support freedom and democracy while optimizing well-balanced value creation by PD.

#### **Decentralized Management of Personal Data**

#### *Value Maximization*

In most cases, PD's added value is maximized by decentralizing its management (DMPD) to the data-subject individuals, as shown in Figure 13.2.

First, PD's utility is maximized by aggregation at the data subject. The quality of PD aggregated at the individual data subject, as in Figure 13.3, is higher than that of PD scattered across many data controllers. Note that the data controllers do not have to share the same ID for each data subject for the sake of this aggregation: If each data controller simply provides each data subject with the piece of his or her PD it holds, all of his or her PD will be aggregated in his or her hands. Note also that this raises no privacy concerns, because PD is disclosed to none other than the data subject himself or herself. Once his or her PD is aggregated in his or her hands, he or she can fully utilize it both for himself or herself (primary use) and for many others (secondary use).

**Fig. 13.2** Decentralization maximizing value. Source: Design by author

**Fig. 13.3** Aggregation of PD to the data subject. Source: Design by author

Secondly, security and privacy are ensured by avoiding centralized management of PD. Decentralizing the management of individuals' data prevents massive abuse of many people's PD. In summary, DMPD maximizes the added value of PD by aggregation to raise its utility and decentralization to ensure security and privacy.

Note that centralized PD management is necessary for some public purposes that are not obviously beneficial to the data-subject individuals. Some examples are taxation, public health (such as contact tracing during pandemics), public security, and criminal investigation. Overwhelmingly more often, however, DMPD creates much larger value than centralized management.

#### *Personal Life Repository*

The author has developed a decentralized personal data store (PDS) called the Personal Life Repository (PLR) to realize DMPD (Hasida, 2013, 2019, 2020). PLR is a software library to embed in personal and corporate apps, as shown in Figure 13.4.

PLR allows users (individuals and organizations) to share their data (possibly containing personal information, business secrets, etc.) with each other through the PLR cloud. The PLR cloud is a collection of online storage services such as Google Drive and OneDrive. DMPD is implemented through end-to-end encryption, by which each data subject (individual or corporate) has full control over which part of the data to disclose to whom.
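The following minimal Python sketch illustrates the end-to-end-encryption idea in the spirit of PLR; it is our own simplification, not PLR's actual implementation. It uses the `cryptography` package's Fernet cipher, an in-memory dict stands in for the public cloud store, and (unlike a real system) the record key is handed over in plaintext rather than via an encrypted channel:

```python
from cryptography.fernet import Fernet

cloud = {}  # stands in for a public cloud storage service (e.g., Google Drive)

class DataSubject:
    """Each subject encrypts their own PD, so the cloud operator
    only ever sees ciphertext (end-to-end encryption)."""
    def __init__(self, name: str):
        self.name = name
        self.keys = {}  # one symmetric key per record

    def store(self, record_id: str, plaintext: str) -> None:
        key = Fernet.generate_key()
        self.keys[record_id] = key
        cloud[f"{self.name}/{record_id}"] = Fernet(key).encrypt(plaintext.encode())

    def grant_access(self, record_id: str) -> bytes:
        # Disclosure is per record: only this record's key is shared.
        return self.keys[record_id]

def read_shared(owner: str, record_id: str, key: bytes) -> str:
    return Fernet(key).decrypt(cloud[f"{owner}/{record_id}"]).decode()

alice = DataSubject("alice")
alice.store("measurements", "chest: 96 cm, waist: 82 cm")
key = alice.grant_access("measurements")  # e.g., handed to a trusted app
print(read_shared("alice", "measurements", key))
```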

PLR apps (apps embedding PLR) can provide stable services to billions of users at no more than the app maintenance costs. The app providers need not pay for the PLR cloud, because PLR users manage their own regions of it. The users' costs are also low if they use nearly free public cloud storage such as Google Drive, which, in most cases, they do.

**Fig. 13.4** Personal Life Repository (PLR). Source: Design by author

**Fig. 13.5** Management and utilization of extracurricular activity data. Source: Design by author

By supporting data sharing among users, PLR supports almost any kind of human-human collaboration, including those supported by enterprise systems and SNSs. Public clouds are used as a PLR cloud by default; they usually permit rather few API calls per unit of time, but this is enough to support collaboration among people, because each person responds to others far less often than once per second on average.

It is often quite easy to develop a PLR app by preparing ontologies and stylesheets. PLR uses ontologies to normalize and coordinate data. The user interfaces for entering and browsing data validated by ontologies are automatically generated from stylesheets rather than hard-coded.
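As a rough illustration of this declarative approach, consider the toy sketch below. The ontology fragment and the renderer are invented for illustration and do not follow PLR's actual formats; the point is that the form is derived from the data description instead of being hard-coded:

```python
# A toy "ontology" fragment: field names with types and constraints.
activity_ontology = {
    "activity_name":  {"type": "string", "required": True},
    "role":           {"type": "string", "required": False},
    "hours_per_week": {"type": "number", "min": 0, "max": 40},
}

def render_form(ontology: dict) -> str:
    """Generate an input form from the ontology instead of hard-coding it."""
    lines = []
    for field, spec in ontology.items():
        label = field.replace("_", " ").title()
        hint = spec["type"]
        if "min" in spec:
            hint += f", {spec['min']}-{spec['max']}"
        if spec.get("required"):
            hint += ", required"
        lines.append(f"{label} ({hint}): ____")
    return "\n".join(lines)

print(render_form(activity_ontology))
```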

PLR has been employed in a real service as part of school education. Figure 13.5 shows how PLR is used to manage and utilize learners' extracurricular data. More precisely, students at Saitama prefectural high schools enter and accumulate data about their extracurricular activities with a PLR app and disclose the data to the school affairs support system operated by Saitama Prefecture; their teachers then use the data to compose school recommendations to universities and employers.

The author's research group is currently conducting or preparing several demonstration experiments using PLR. One such experiment concerns infant medical checkups in Arao City, Kumamoto Prefecture, Japan. The city office will let parents use a personal app embedding PLR to compose documents (such as interview sheets) about their children and share those documents with the city office. As the parents own the document data, they can then use it for purposes outside the scope of infant medical checkups. For instance, they may use such data to compose other documents to submit to the city office, or to access services provided by private businesses, including clinics and hospitals.

**Fig. 13.6** Personal AI (PAI). Source: Design by author

#### **Personal AI**

DMPD does not mean that each individual must do anything special. Instead, a personal AI (PAI) is exclusively dedicated to each individual user, manages and utilizes all his or her PD, and thereby intervenes in his or her actions more deeply and carefully than other technologies, including CAIs: It provides the user with the best personal services, such as selecting the best-suited products, personalizing individual services, or assisting behavior changes for better performance in study and business, as shown in Figure 13.6.

As discussed earlier, however, some strict governance of PAIs must be secured. Otherwise, one's PAI may fully exploit one's PD and inflict severe damage, whether for the benefit of its provider or due to bugs. If PAIs are to replace CAIs, they should be properly governed so as to benefit all stakeholders, including individual users, providers, and societies.

#### *Purchase Support*

The most profitable application of PAI is purchase support. As shown in Figure 13.7, for instance, suppose you visit a tailor, get measured, and store the measurement data in PLR. The catalog maker (which we will later call a 'knowledge mediator') collects information (measurements, colors, materials, etc.) about ready-to-wear (RTW) clothes from apparel makers and compiles an RTW catalog. Your PAI downloads the catalog and recommends some clothes to you by matching your PD against the RTWs in the catalog, without disclosing the PD to others. If any recommended RTWs appeal to you, you purchase them. The payment goes to the catalog maker, who passes it on minus a commission. Parts of this commission will be given to the tailor, the PAI provider, and perhaps some others, because they contributed to the catalog maker's commission income.

**Fig. 13.7** Purchase support. Source: Design by author
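A minimal sketch of the local matching step might look as follows. The data structures are hypothetical; the essential point is that the catalog comes to the PAI and the measurement data never leaves the user's device:

```python
# The user's PD, held locally by the PAI (never uploaded).
measurements = {"chest_cm": 96, "waist_cm": 82}

# A downloaded catalog compiled by the knowledge mediator from apparel
# makers; each entry lists the size range an item fits.
catalog = [
    {"item": "jacket A", "chest_cm": (94, 98), "waist_cm": (80, 86)},
    {"item": "jacket B", "chest_cm": (100, 104), "waist_cm": (88, 94)},
]

def fits(entry: dict, pd: dict) -> bool:
    """True if every measurement lies within the item's range."""
    return all(lo <= pd[k] <= hi
               for k, (lo, hi) in entry.items() if k != "item")

recommendations = [e["item"] for e in catalog if fits(e, measurements)]
print(recommendations)  # ['jacket A']
```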

The commission for this purchase support is huge, because it may apply to all the services directly involving you, either as a service recipient or as a service provider. You are a service recipient not only in your private life but also in your work, and you are a service provider in your work. The total cash flow involved is more than 110% of GDP on average, because household consumption usually accounts for more than 60% of GDP and the labor share is typically a little more than 50%. In addition, economists estimate the value of non-paid services, such as housekeeping and childcare, to lie around 30% of GDP, making the entire value of the services directly involving individuals more than 140% of GDP. Hence, assuming a commission rate on the order of 10%, the total commission is probably about 15% of GDP.
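As a back-of-envelope check of these figures (the 10% commission rate is our assumption; the text itself states only the final estimate):

```python
# GDP shares quoted in the text (all approximate lower bounds).
household_consumption = 0.60  # "more than 60% of GDP"
labor_share           = 0.50  # "a little more than 50%"
unpaid_services       = 0.30  # "around 30% of GDP"

services = household_consumption + labor_share + unpaid_services
commission_rate = 0.10        # assumed, not stated in the text

print(f"services directly involving individuals: {services:.0%} of GDP")     # 140%
print(f"implied total commission: {services * commission_rate:.0%} of GDP")  # 14%
```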

#### *Life Guidance*

Suppose you bought honey from Alibaba and diapers from Amazon, as shown in Figure 13.8. Using your purchase data, your PAI would be able to advise you not to give honey to your baby because honey may cause infant botulism, a deadly illness affecting babies younger than one year. This is a merit of aggregating PD to the data subject (more precisely, to his or her PAI). Amazon provides an "Amazon Anshin Mail" service in Japan ("anshin" means security), with which they would send you this advice via e-mail if you happened to buy both honey and diapers from Amazon, but that fails to work if you bought them from different retailers, which is probably more often the case.

**Fig. 13.8** Living guidance. Source: Design by author

#### *General Behavior Support*

Your PAI may be able to urge you to do something useful even when you are reluctant. For instance, the PAI could persuade you to go to a physical checkup by making a reservation at a clinic, as shown in Figure 13.9. It may also support other behavior changes, such as improving health literacy, daily habits, and so forth.

#### *PAI's Added Value*

How large is PAI's added value in comparison with that of CAI? Figure 13.10 shows how service providers may employ PAIs instead of CAIs as their digital customer contact points. Suppose service providers P1 . . . Pn have used their CAIs as their digital customer contact points, and the knowledge in these CAIs is K1 . . . Kn, respectively. If the service providers use each customer's PAI instead of the CAIs as their digital customer contact point, then the functionality of this PAI will subsume K1 . . . Kn and the PAI will be able to access and aggregate all the types of PD (D1 . . . Dn) which P1 . . . Pn can access, respectively.

The PAI would thus generate much larger value than the CAIs, because it potentially provides as many as (n + 1)² types of services, compared with only n types of services by P1 . . . Pn, as shown in Figure 13.11. For instance, the PAI could recommend products using Amazon's recommendation engine and Alibaba's purchase data.
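
A hedged reading of this count, assuming the "+1" stands for the PAI provider's own knowledge and the PD the PAI holds directly: each of the n + 1 knowledge sources can be combined with each of the n + 1 types of PD, whereas each CAI combines only its own knowledge with its own data:

```latex
\underbrace{(n+1) \times (n+1)}_{\text{PAI: any } K_i \text{ with any } D_j} = (n+1)^2
\qquad \text{vs.} \qquad
\underbrace{K_1 D_1, \ldots, K_n D_n}_{\text{CAIs: } K_i \text{ only with } D_i} = n.
```

For n = 3 this is 16 versus 3; the example of applying Amazon's recommendation engine to Alibaba's purchase data corresponds to one of the off-diagonal pairs with i ≠ j.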

**Fig. 13.10** PAI as one-stop digital customer contact point. Source: Design by author

**Fig. 13.11** PAIs create much larger value than CAIs. Source: Design by author

#### *Knowledge Mediator*

Some system that aggregates various sorts of knowledge and provides the aggregated knowledge to the PAIs of many individual users is considered necessary, as shown in Figure 13.12; we called it a catalog maker above and will call it a knowledge mediator hereafter. This is far less redundant and far more efficient than many PAIs of many people aggregating knowledge independently from each other. Note that the knowledge mediator enjoys economies of scale, in the sense that the cost of serving each PAI user is approximately the cost of the knowledge aggregation divided by the number of users. So does the PAI provider, of course, because the cost of serving each user is approximately the cost of PAI development divided by the number of users. The knowledge mediator and the PAI provider (who may or may not be identical) together constitute a platform to intermediate between PAI users and providers of goods and services.

Neither the knowledge mediator nor the PAI provider needs centralized PD management, because they need not access any PAI user's PD in order to serve him or her. As part of knowledge aggregation and PAI development, they may have to collect and analyze some (not all) PAI users' PD to acquire general knowledge for personalization (knowledge about what types of goods and services fit what types of users, among others). Yet this does not qualify as centralized PD management, because this general knowledge identifies no particular user.

Although the knowledge mediator and the PAI provider do not directly intervene with any individual user, they must be somehow governed so as to maximize the merit of PAI to users and society while controlling its risks. A decentralized governance of PAI to this end is discussed later.

#### *Displacement of CAIs*

As discussed before, global collaboration to reduce CAIs is impossible, because CAIs—unlike nuclear wars—will create winners. As shown in Figure 13.13, however, it is probably possible to let service providers (both public and private) voluntarily shift from CAIs to PAIs, because PAIs offer more advantages. If PAIs spread to some extent, then so will DMPD, because the former is based on the latter. DMPD enables decentralized governance of PAI, as not only government agencies but also research institutes, universities, private companies, NPOs, etc. could easily collect personal data and check PAIs' behaviors for the sake of value co-creation balanced among people, businesses, and societies. This would improve PAI's social acceptability. As this cycle, illustrated in Figure 13.13, turns, more service providers employ PAIs instead of CAIs.

**Fig. 13.13** PAIs displacing CAIs. Source: Design by author

#### **Human-AI Interaction**

PAI may be implemented quite soon, possibly based on LLMs (Large Language Models) such as GPT. A mediator's knowledge aggregation could be the training of some LLM, and each individual user's PAI could download or remotely access that model and use it in services to the user.

The interaction between the human user and such an AI (not only PAI) typically communicates natural-language plain-text data, but this interaction will be more efficient if more semantically explicit data are used instead, where 'semantically explicit' means that the mapping between the data and their meanings is easy. For instance, Microsoft Bing can present search results in the form of tables and charts, which are easier for users to comprehend than plain text. On the other hand, LLMs generate program code better than natural-language text, because programming languages are formal languages, which encode semantics more explicitly than natural languages do.

The human-AI interaction should be optimized by communicating the most semantically explicit data for both people and AI. The author considers graph documents (Hasida, 2016, 2017) to be such data. Graph documents are documents in the form of diagrams or graphs with explicit semantic structures. Figure 13.14 shows a graph document explaining why graph documents should replace traditional text documents.

Graph documents are labelled directed graphs validated by some ontologies. Nodes in these graphs are instances of classes defined in the ontologies and contain basic content such as text, images, and video, normally corresponding to simple sentences or noun phrases. Links therein are triplets which are instances of properties in the ontologies and encode semantic relationships between their end nodes. These relations are typically discourse relations, as in Figure 13.14.
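
A minimal sketch of this data structure in Python, assuming a tiny discourse-relation ontology; the class names, property name, and contents are invented for illustration and are not Hasida's (2016, 2017) actual ontology:

```python
# A graph document as a labelled directed graph: nodes are instances of
# ontology classes holding basic content; links are (subject, property,
# object) triplets whose properties are discourse relations.
nodes = {
    "n1": {"class": "Claim",
           "content": "Graph documents should replace text documents."},
    "n2": {"class": "Evidence",
           "content": "Composing graph documents is more productive."},
}

links = [("n2", "supports", "n1")]  # a discourse relation from the ontology

for subj, prop, obj in links:
    print(f"[{nodes[subj]['class']}] --{prop}--> [{nodes[obj]['class']}]")
```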

We consider that people and AI (possibly PAI) should interact by collaboratively composing graph documents, as in Figure 13.15, because graph documents are probably the most semantically explicit data for both people and AI. In fact, graph documents are easier than text documents for people to compose: Zhang (2020), a master's thesis at the author's lab, demonstrated that collaborative composition of graph documents is more productive than collaborative composition of text documents. Graph documents, like program code, are considered also more tractable for AI than text documents.

The graph documents in Figure 13.15 are stored in PLR. This is both to safeguard the documents and to utilize them to develop and govern (improve) AIs, as discussed later.

**Fig. 13.14** A graph document. Source: Design by author

**Fig. 13.15** Human-AI interaction via graph documents. Source: Design by author

The composition of graphs (graph documents, argument maps, concept maps, mind maps, etc.) improves critical-thinking (CT) skills (Twardy, 2004; Álvarez Ortiz, 2007; Barta et al., 2022). As argument mapping improves CT better than concept mapping and mind mapping do, graph documents are in this respect probably more effective than the latter two, because, unlike them, argument maps and graph documents are both typed by ontologies (and are hence semantically explicit). As I show in Figure 13.15, however, argument maps cannot be used for general human-AI interaction, because the ontology behind argument maps is too small to address general document content.

Graph documents are thus probably the best sort of data to mediate human-AI interaction. As a matter of course, however, various other sorts of data (tables, charts, etc.) may be incorporated or integrated in graph documents in order to improve semantic explicitness.

The author expects graph documents not only to enhance society-wide productivity, but also to protect and strengthen democracy in at least two respects: First, their semantic explicitness and the CT improvement of the general public would curb misinformation and reduce biases. Second, graph documents could mitigate wealth disparity, as the CT gain tends to be larger for people with low CT skills.

#### **Decentralized Governance**

Figure 13.16 depicts, among other aspects, the decentralized governance of PAI and other personal services. Not only government agencies but also other organizations can monitor and audit the behaviors of personal services by analyzing PD collected from the individual service users via mediators, in order to maximize those services' added value while balancing the value distribution among individuals, businesses, and global/local societies. It is vital that multiple auditors check services in parallel, and that they monitor one another by checking each other's analysis results, thus establishing and maintaining their social trust. The result is a PD-oriented decentralized system for governing personal services.

**Fig. 13.16** Open citizen science. Source: Design by author

The service auditors (and also designers) in Figure 13.16 require not only PD generated by individual users, but also PD generated by services, in order to analyze the interaction between the two. Regulation is therefore necessary to guarantee a form of data portability that encompasses the PD generated by the services, which is stronger than the data portability in the GDPR.

At any rate, DMPD enables a decentralized system for statistical analysis of many people's PD. This system, open citizen science, is useful not only for development and governance (improvement) of services including PAIs, but also for many other purposes encompassing policy making, public health, machine learning, medical science, political science, psychology, sociology, and so forth. In this connection, note that some mediators in Figure 13.16 are both data mediators (providing service designers/auditors with data-analysis results) and knowledge mediators (providing PAIs with knowledge, which is some sort of data-analysis result).

#### **Conclusion**

PLR supports the decentralized management of PD (DMPD) of up to billions of individuals at extremely low cost together with high security and privacy. Accordingly, it will help both PAIs and graph documents spread worldwide. DMPD also allows individuals to provide their aggregated PD for the sake of decentralized governance of PAIs and other personal services. PAIs will displace CAIs because this governance will allow them to benefit all stakeholders far more. On the other hand, graph documents facilitate verification and enhance the diversity of information users can access, securing freedom of thought, conscience, speech, and choice based on scientific grounds. In summary, DMPD supports freedom, democracy, and well-balanced value co-creation, as depicted in Figure 13.17.

There are a few issues to address in order to implement this agenda. First, service providers should understand that PAIs are more profitable than CAIs. If this is the case, then PAIs and DMPD will jointly spread, establishing decentralized governance of PAIs, improving their social acceptability, and mostly displacing CAIs. Second, graph documents should also spread together with DMPD, as AI providers could more easily understand their commercial merit than the merit of DMPD. Lastly, some security technologies—such as digital signatures—are necessary to jointly secure the authenticity of information.

**Fig. 13.17** Decentralized management of personal data (DMPD) supports freedom, democracy, and value co-creation. Source: Design by author

#### **References**


**Kôiti Hasida** finished his doctoral study at the Graduate School of Science, the University of Tokyo, in 1986, obtaining the degree of Doctor of Science. He was affiliated with the Electrotechnical Laboratory (ETL) from 1986 to 2001 (seconded to the Institute for New Generation Computer Technology (ICOT) from 1988 to 1992) and with the National Institute of Advanced Industrial Science and Technology (AIST) from 2001 to 2013. He has been at the University of Tokyo since 2013, and concurrently at the RIKEN Center for Advanced Intelligence Project since 2017. His research themes encompass natural-language processing, artificial intelligence, and cognitive science, among others. He served as President of the Association for Natural Language Processing and President of the Japanese Cognitive Science Society. He has proposed technologies and business models for value creation through decentralized management of personal data, and is promoting the spread of these models in collaboration with public and private sectors.


## **Chapter 14 How Digital Geographies Render Value: Geofences, the Blockchain, and the Possibilities of Slow Alternatives**

**Jeremy Crampton**

## **Digital Geography Renderings**

This chapter examines how digital geographies can be mobilized to create, capture, and extract innovative forms of value that enable and deepen (post)neoliberal forms of urban growth. The main argument is that digital geographies are used to create new urban growth markets through the production of different forms of value. Specifically, I focus on two examples of digital geography and the forms of value that they render:

1. geofences and geoframing, which render value through the production of subjects; and
2. cryptocurrencies, NFTs, and the metaverse on the blockchain, which render value through a politics of exit.

Both geofences/geoframing and cryptocurrency on the blockchain are specific instances of new markets, and, I would suggest, intersect with the concerns of digital geographers. Yet we have not talked much about how digital geographies are enrolled in the formation of new markets, despite the increasing interest in financialization and fintech. To some extent this represents the youth of digital geographies as a subdiscipline. It was only in 2016 that a specific "digital turn" was identified in geography (Ash, Kitchin, & Leszczynski, 2016), with a key organizing framework for dealing with digital geography's materiality appearing six years later (Zook & McCanless, 2022). It is time for digital geographers and others interested in digital urbanism to understand these new markets and how they operate. What I aim to
do here is not so much to interpret them on their own terms, that is, what they may claim about themselves, but to offer a critique or problematization that creates a different perspective, a slight turning or angle of view. The purpose is to provide ground for an interpretation that is situated in two related lines of thought regarding digital urbanism today; that is, the *rentier* and the *rendering*.

The rentier model of digital urbanism is marked by the increasing privatization of formerly public spaces and institutions, or what planners call privately owned public space (POPS) (Kayden, 2000). These privatized spaces often have the appearance of being public spaces, such as gardens, fountains, and public-like squares, but are privately owned and controlled (Minton, 2016). As applied to digital urbanism, the most fundamental of these is the Internet itself. Although it was developed by public agencies within the academic-military nexus, it was privatized in 1995, which then led to the dot-com boom and bust of 2000 (Tarnoff, 2022). Such privatization allows economic relations to be established in which value (usually monetary) can be extracted through the rentier-tenant relation or its digital economy equivalent. Sadowski, for example, has proposed that corporate technology platforms are increasingly interdigitated with urban infrastructures, where they can now act as rentiers (Sadowski, 2020). On this model, rentiers do not produce value or innovate new processes or services, but merely sit and collect takings (fees, subscriptions, and other payments). Rentiers derive their rents because they hold exclusive access to goods and services. Internet service providers (ISPs) such as Verizon or British Telecom, for example, can rent out their modems to subscribers who pay them fees to access the Internet. In doing so, ISPs do not innovate or act as entrepreneurs but sell access. Rentier economics is therefore characterized by "having rather than doing," and digital platform urbanism constitutes one of the main ways it operates as a form of rentier capitalism and, more specifically, platform rents (Christophers, 2020).

Sadowski identifies a threefold typology of platform urbanism comprising three interdigitated relations between platforms and the urban, acting concurrently with one another. These are (1) the operation of platforms to provide *oversight* of city governance; (2) to *operate* city services; and (3) their *ownership* of, or sovereignty over, city spaces (Sadowski, 2020, 2021). Although the first two enumerated stages are by now increasingly familiar, involving as they do the installation of smart sensors and surveillance devices (traffic cams, air pollution monitors, and so on) in the first case (that is, the now familiar smart city), and urban dashboards, urban analytics, and the corporatization of economic and social interactions on platforms in the second case (that is, platform urbanism), it is the third or ownership phase that is most relevant for our discussion because it focuses squarely on the rentier. A key point concerning such ownership is that it is not just about portfolio diversification (investments seeking a return), but about governance through control:

The ownership of territory—in the sense not just of constructing and managing a building, but also of the provision of infrastructure and governance—grants technology capital even greater *dominion* over and data about people, places and processes in the city. (Sadowski, 2021, p. 1737, emphasis added)

Although I concur with this analysis (Sadowski provides a number of illustrative examples), it is also possible to push this argument to explore how specific forms of digital geographies work to create new growth markets and new forms of value beyond the monetary. To do so, I utilize and contribute to the classic theory of the urban growth machine, now recast as the digital growth machine, to pick out new digital *renderings* of the city, using a term introduced by Rosen and Alvarez-León (2022). Renderings are where a more explicitly digital geographical process can be discerned that operates and mobilizes rentier capitalism.

Although Rosen and Alvarez-León (2022) only incidentally refer to the term "rendering," it is worth noting its incredible richness and complexity. "Render" is a verb and a noun with a long etymology that traces back to *re-* (prefix) + *dare* (to give). To render is to give (back), to give in exchange, to produce, to give up, and to represent or portray. In law it means to convey in the sense of yielding property or a payment; in finance there is a sense of rendering accounts; and in computing there is a sense of rendering or drawing a scene or image. Throughout these definitions there is a strong sense of something owed or paid out, as well as a representation, often visual in nature. "Renter" and "render" are etymologically connected; to rent and to rend both share senses of giving (back) or giving up (compare surrender, to give oneself up). Finally, the Latin root word *dare* (to give) is also the etymology of the word datum (plural data), a useful reminder that what is given and taken in digital geography rents and renderings are data.

In other words, a rendering is a form of data representation that can be extracted as rent. Notably, when we speak of rent we often have monetary value in mind, but as I hope to show below, other forms of value are also possible, especially as forms of human subjectification.

Rosen and Alvarez-León (2022) emphasize two points: first, that urban elites capture decision-making and control over urban governance through renderings; and second, that despite seeking to be positioned as digital, these processes depend upon land, or what Sadowski (2021) calls *territory*. As Rosen and Alvarez-León (2022, p. 14) note:

Land remains the foundation of urban growth possibilities—even as it is transformed via digital means. Despite the increasingly digitized character of the contemporary economy, where the technology industry coordinates with urban elites to advance digitally oriented capital accumulation and consumptive possibilities, growth is still predicated on spatial relationships and expressions, where land remains a common and key asset.

What the digital growth machine logic reveals is the emphasis on the creation of new markets to pursue and profit from growth, not only from capital accumulation, but from other forms of value that derive from digital geographic renderings.

To explore these variant forms of value, I discuss two digital geography renderings. I argue that these forms produce value through the production of specific *subjects* and a politics of *exit* from traditional geopolitical systems.

#### **Geofences and Geoframing: The Production of Subjects**

A geofence is a virtual boundary. A geofence or reverse location search is the search of a database covering a specific geographical area (either stationary or mobile) for a specific time. It is a "reverse" search in the sense that, unlike a typical search to find a known suspect's geolocational data, it begins with a known location and attempts to identify individuals or suspects. Courts have described geofences as a net that is thrown over an area, usually for devices (e.g., smartphones) that may have been inside the geofenced area, as defined by a bounding box of latitude and longitude coordinates (see Fig. 14.1). Everyone who entered that bounding box is problematized as a potential risky subject or, in the case of commercial geofencing, as a person of interest to capital.
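
Operationally, a geofence of this kind reduces to a simple containment test over a location database, as in the following sketch; the coordinates, timestamps, and record fields are invented for illustration:

```python
# A reverse location search: every record falling inside the bounding
# box during the time window is returned, guilty or innocent alike.
def in_geofence(record, south, north, west, east, t_start, t_end):
    return (south <= record["lat"] <= north
            and west <= record["lon"] <= east
            and t_start <= record["t"] <= t_end)

records = [  # a location-history database (values are invented)
    {"device": "A", "lat": 38.8899, "lon": -77.0091, "t": 1_000},
    {"device": "B", "lat": 38.9050, "lon": -77.0400, "t": 1_000},
]

hits = [r["device"] for r in records
        if in_geofence(r, south=38.8880, north=38.8920,
                       west=-77.0120, east=-77.0060,
                       t_start=900, t_end=1_100)]
print(hits)  # -> ['A']
```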

Geofences have been widely used in the advertising and geotargeting industry as a more granular form of customer characterization to improve on classic geodemographics. In the latter, areas such as zip or post codes are given profiles according to the types of people who may live there (e.g., "upwardly mobile young couples" or "urban gentrifiers"). These profiles are derived from census data, customer surveys, point-of-sale data, and so on. With the advent of mobile phones, advertisers can dramatically improve on geodemographics in two ways: the area of interest can be updated dynamically, and they can access individual customer profiles. When an individual enters a geofenced area, messages or promotions can be delivered, their e-scooter may slow down or even halt, or their route may be recorded and saved to a database and made available to law enforcement, or for subsequent targeting by a political campaign. Geoframing uses this historical data (e.g., a store could access all the devices that were nearby over the last few months) to identify the owner of the device and their home address, and to continue sending advertising, either to the mobile device or to the home address. Third-party data brokers such as SafeGraph, Acxiom, and L2 access, compile, and sell these records in a largely unregulated marketplace, with scant protection of these data from re-identification (if anonymized) or data breaches.

One of the most powerful features of such a search—so powerful in fact that it shocked the US Supreme Court into requiring a warrant—is that the search can take place retroactively, or as the justices put it, "[geofences] give the Government near perfect surveillance and allow it to travel back in time" to any place on earth and look inside everyone's phone (*Carpenter v. United States*, 2018, p. 2). Because it is a search of a database of people's phones, it is the opposite of a targeted form of surveillance that seeks to examine a specific subject's property or dwelling-place; it will look at everyone, whether guilty or innocent, who entered the geofenced area.

Geofences often use maps, GIS, and other forms of geolocational renderings such as bounding boxes to operate. Geofences can often seem to be quite targeted, but if they fall over a densely populated area or a well-travelled highway the search can be quite expansive. In a case in Chicago involving the theft and transport of pharmaceuticals, for instance, law enforcement asked for three geofences, each one covering over 31,000 square meters, or more than 330,000 square feet. As the court noted, this is only the surface area; there were multiple commercial buildings, a multi-story residential building, and a gym within the geofence. In another case in Minneapolis a geofence search had the potential to gather data on "tens of thousands" of people (Webster, 2019). It is this sweeping and exhaustive search capability that led the US Supreme Court to strike down the conviction of Carpenter based on a lack of warrant for his cell tower data.

**Fig. 14.1** Map of the US Capitol provided by the FBI in the case against Jared Adams aka "jokerschild1994". Geofenced area indicated by dashed line. Reprinted from "Jared Adams Statement of Facts", George Washington University Program on Extremism (FBI, 2021, p. 4). Copyright by Federal Bureau of Investigation 2021. Reprinted with permission

However, the ruling provided only a brief respite as law enforcement has now turned to purchasing or otherwise acquiring location data directly from private vendors such as Google and Amazon, or from third party data brokers. In "real-time bidding" for example, a web-page user's data is shared with data brokers and adtech companies hundreds of times a day, including the user's internet protocol (IP) address and location data (Wodinsky, 2022).

Additionally, GPS data are much more locationally specific than cell tower data; while the latter may only narrow down to a few city blocks, GPS can often be as precise as 5 m, or the difference between being inside a building or not. A dramatic example of the importance of this level of precision occurred during the illegal storming of the US Capitol Building on January 6, 2021, which occurred in the immediate aftermath of Donald Trump's presidential election loss to Joe Biden. During this event, hundreds of Trump supporters forced their way into the government building where the certification of the election results was occurring, forcing the rapid evacuation of members of Congress and the then Vice President Mike Pence. In some of the charges against suspects, the FBI have cited geofence data to show that someone was inside the Capitol Building (criminal trespass and obstructing Congress) as opposed to standing outside it (not a crime of trespass). The difference may be only a matter of feet, but the consequences are very different: obstructing Congress is a felony and carries up to a 20-year sentence.

As can be seen in Figure 14.1, one suspect, a man called Jared Adams aka "jokerschild1994", had his location recorded by Google's "blue dot" display radius symbology, which shows where Google believes the person (or their device) is located with 68% certainty. Using these data, the FBI was able to secure a conviction of Adams (FBI, 2021).

An initial review of bibliometric databases indicates that geographers have not yet engaged with the social, political or privacy implications of geofences (for reviews in the legal and transportation sectors see Amster & Diehl, 2022; Moran, 2021). Yet such precise locational information that promises to problematize individuals as risky subjects or persons of interest is largely unregulated and is left to the corporate policies and incentives of the companies concerned. This gives companies such as Google and thousands of data brokers tremendous power and at the same time a lack of accountability.

The rentier model of the economy affords an opportunity to understand something of a shift from the classic competition-driven marketplace, where more efficient innovations drive down costs (e.g., through automation) and increase productivity. As a number of writers have pointed out, growth (including innovation) in western democracies has slowed if not halted (Gordon, 2016), but this does not mean that the production of value by other means has similarly halted. Indeed, geofences and their production of actionable subjectivities, whether as potential "dangerous individuals" who must be identified and governed (Foucault, 1978) or as persons of interest to corporate entities and data brokers, clearly produce value in the rentier economy. It is also perhaps not even correct to say that innovation is lacking (assuming that innovation is always tied to the production of value); by slightly turning the question of innovation we can postulate that a different form of innovation is at stake: one that is extractive and exploitative, or what we might call toxic innovation. Geofences have created a new market in the production of human subjectivities based on geolocational data. I will return to this distinction below in my discussion of an alternative form of responsible innovation.

## **Leaving Traditional and Constructing New Territorial Systems: Cryptocurrency, the Metaverse, and NFTs on the Blockchain**

The startling rise and demise of cryptocurrency over the last decade and a half has so far attracted little attention in geography or geo-fintech. With few exceptions (Rodima-Taylor, 2021; Zook & McCanless, 2022), digital geographers and those working on the technological and geographical have yet to contribute substantially to our understanding of the blockchain and cryptocurrencies. Yet at one point cryptocurrencies were worth over three trillion dollars (on paper), with two thirds of that value being wiped out in the so-called "crypto winter" of 2022 (named after the AI Winter of the 1990s when interest in AI declined sharply). The blockchain has also been invoked as the ultimate backstop for a wide range of information technology and radical new forms of political economy, such as longtermism and effective altruism (EA), that have proven popular in the digital tech industry. The question therefore arises how best to grapple with the geographical interests at play in the crypto-blockchain sector, not least its political and economic geographies.

In this chapter I approach the blockchain, cryptocurrencies, and non-fungible tokens (NFTs) as digital geographic renderings that produce new imaginaries of political geography: a new politics of exit. While this exit may involve a literal exit from planet earth to colonies on the moon or Mars and beyond, as envisaged by Elon Musk, or an exit from landed territories, as in seasteading, more typically the politics of exit is an exit from the financial sector and, more ambitiously, from the state or even, in some formulations, from democracy itself. For some blockchain enthusiasts exit from the state is achieved by conceiving of nation-states as "startups" or "cloud countries" (Srinivasan, 2022), wherein a new "network state" is envisaged that will connect people across different geographies. Such network states are imagined by Srinivasan as self-governed; they can act collectively, are on the blockchain, have a strong founding leader figure, and have diplomatic recognition of their physical territories, among other attributes (Srinivasan, 2022). For example, crypto-investors attempted to buy an island in Fiji—"a crypto-paradise" promised the advertising—using 10,000 NFTs to buy plots of land. Although it quickly folded due to lack of investment (Butler, 2022), it is only one of numerous attempts to put territories, properties, and real estate on the blockchain. According to one of its leading proponents, "the point is that a network state is *not* a purely digital thing. It has a substantial physical component" (Srinivasan, 2022, p. 224, original emphasis).

If it seems novel that states verify their assets and values on the blockchain, it should be borne in mind that these assets still bear all the hallmarks of financial speculative assets which are expected to yield a return (i.e., rent). This is especially true of cryptocurrencies, which despite their name do not typically operate as currencies—they can typically be used only to buy other cryptocurrencies or NFTs (car manufacturer Tesla ended a three-month experiment with Bitcoin payments in May 2021). People buy cryptocurrencies because they speculate that their price will rise. They make these speculations in the knowledge that cryptocurrencies are like financial securities, and they are cryptographically verified on the blockchain. True, the value of a cryptocurrency may decline rather than increase, but the same is true of all assets. The key point is that they are not secured via regulation or financial institutions but by means of exit.

These kinds of activities represent new, almost unlimited spaces for capital to be invested, but despite their novelty they are clearly not so different from previous rounds of value creation and extraction that characterize the digital growth machine: namely, rent-seeking assets enabled through privatization and monopoly control. It is also worth clarifying that as an innovation the crypto-blockchain is primarily an extractive one rather than one that creates value. As Christophers observes, "[r]entierism is fundamentally about securing, protecting and sweating scarce assets" (2020, p. 90). On this model, the goal is to make crypto (and its infrastructure, such as the internet) a scarce asset requiring a buy-in.<sup>1</sup>

In addition to purchasing physical land, digital real estate investors have bought virtual plots of land. It is here that we see most clearly how digital geography renderings are enrolled in the growth machine, often via the mechanism of NFTs. These virtual spaces are often dubbed the metaverse, although that term is lacking in clarity, and can include virtual reality (VR) games, augmented reality, network states, and web3. In the next section I want to unpack some of these confusing and nebulous terms, beginning with one of the more spectacularly unsuccessful examples of exit, NFTs. However, I want to emphasize that much of this constellation of terms and concepts is interlocking, and that there are other areas, such as digital twins, that have been more successful.

Metaverse virtual spaces, or "lands," are bought with cryptocurrency (typically Ethereum) through exchange platforms such as Opensea and WeMeta. The latter currently trades seven metaverse economies, including The Sandbox, Decentraland, NFT Worlds, and four much smaller ones (the metaverse market suffered a crash at about the same time as the crypto-winter in March 2022). Others abound with names like EveryRealm, SuperWorld and Legacy, "an NFT-powered recreation of London" (The Economist, 2022). Land on these platforms can be bought and sold. In 2021 virtual real estate investor Republic Realm bought a patch of land in Decentraland for more than US\$900,000 and land in The Sandbox for US\$4.3m, and has investments in 23 metaverse platforms (Howcroft, 2021; The Economist, 2022). The auction house Sotheby's, which has been involved in multiple NFT auctions, has duplicated a model of their London offices in the metaverse to which they control access.

<sup>1</sup>There is currently legal and juridical uncertainty whether cryptocurrencies are more like assets or securities. In the USA, both the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have made claims about legislative jurisdiction. In June 2022 a bill was sponsored in the US Senate by Senators Lummis and Gillibrand to regulate cryptocurrencies under the more crypto-friendly CFTC, positioning crypto more akin to assets than securities. Crypto lobbyists praised the bill (Newmyer, 2022), while the SEC has pursued a more vigorous investigatory path (del Castillo, 2022).

Perhaps the closest realization of land and location purchases on the blockchain is Earth2.io. Founded in late 2020, it is positioned as a massive digital game, the first phase of which is purchasing and trading real-world (earth-1) locations and claiming ownership over them (e.g., planting an American flag over the Sydney Opera House). Land can be purchased as an NFT from a map (powered by MapBox) in 10 m² tiles (5.1 trillion tiles, of which 50 billion are purchasable), with improvement fees, income tax, and so on. According to the guide its main purpose is to create a whole virtual reality game, but as of the end of 2022 the focus is entirely on making a profit through land trades, and it might best be described as a geographical "frontend" to give life to NFTs. Land is divided into a limited number of premium Class 1 tiles, and greater numbers of less expensive Class 2 and 3 tiles. Looking past some of the Borges-like claims ("a 1:1 map of the entire earth . . .") we still might be forgiven for seeing this only as a bitcoin trading scheme, but its choice of implementation is still of interest geographically.

The initially stated purpose of the blockchain was to solve a problem with digital currencies; namely, how could it be verified that a digital monetary asset had not already been spent, without using a trusted third party such as a bank or financial clearing house—a problem known as double-spending. The answer—Bitcoin—was provided in a paper by Satoshi Nakamoto, a person or persons still unknown (Nakamoto, 2008). Nakamoto's goal of operating outside the banking system made the problem very difficult. Banks and totally digital payment systems such as PayPal (established in 1998, 10 years before Bitcoin) had to solve double-spending by using a trusted third party, thereby centralizing control, trust, and the point of failure. Nakamoto's goal was to exit from this centralized system and to circumvent the need for trust altogether by developing the blockchain—a cryptographically verified ledger or database that could record and verify all transactions. Additionally, only valid transactions can be recorded, through a process known as proof-of-work, which in the case of Bitcoin and subsequent cryptocurrencies means computationally solving an arbitrary mathematical puzzle, commonly known as mining. Tremendous computational resources are required to solve these abstract puzzles, none of which are real-world problems, giving rise to shortages of computer parts (especially GPUs), tremendous price inflation for computer chips, and negative environmental impacts from the energy consumption and carbon footprint of the mining farms. Some crypto-advocates, such as the former WeWork CEO Adam Neumann, have proposed using cryptocurrencies to fight climate change, but these proposals typically rely on the largely unproven concept of carbon credits. China banned crypto-mining and trading in September 2021 in order to maintain central control over the banking sector and to reserve power assets for other activities, which partially alleviated GPU shortages. More recently the industry (including Ethereum, which developed the smart contract) has flirted with proof-of-stake consensus, which uses far less energy since it is not based on mining—however, it completely removes the original decentralized mechanism, since it relies on who is invested with valuable coins (either total worth or some other value captured in an on-chain census). It would also significantly "un-level" the playing field that crypto is meant to play on, and concentrate wealth and power in an oligarchic elite. A "stake" is after all an item of value, and capital will not just be allowed to lie around but, like an accretion disk around a massive black hole, will fall swiftly into the orbit of existing wealth.
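
To give a sense of what "solving an arbitrary mathematical puzzle" means in practice, here is a toy proof-of-work loop in Python; real Bitcoin mining hashes block headers with double SHA-256 at vastly higher difficulty, so this is a sketch of the principle only:

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash of the block data
# falls below a difficulty target (i.e., has enough leading zero bits).
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # anyone can verify this with a single hash
        nonce += 1

nonce = mine(b"example transactions", difficulty_bits=16)
print(f"nonce {nonce} satisfies a 16-bit target")
```

The asymmetry is the point: finding the nonce takes many hashes on average (about 2^16 here), while verifying it takes one, which is what lets the network check work without trusting anyone.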

It is this form of central, state control that the blockchain was built to supersede, to provide, in other words, an "exit." The notion of exit has a convoluted history, invoking a gamut of figures from the political far right, libertarians, and Silicon Valley investors such as Peter Thiel (co-founder of PayPal). Whether these ideas deserve to be taken seriously is not quite the point; the fact is that these imaginaries are having real-world effects, and as we have seen they lie at the heart of the blockchain/cryptocurrency and NFT project. Collectively, these and associated projects of decentralized finance (DeFi) are known as "Web3," following earlier iterations of the web and the Internet. While the precise definition of Web3 remains amorphous—and for some unrealizable except as a performative utterance attempting but failing to bring into being new realities—for our purposes it has already produced (i.e., rendered) value, namely the politics of exit. As described recently by Smith and Burrows (2021), exit is constituted by a form of warmed-over neoliberalism and techno-libertarianism. Its features include most of those identified by Srinivasan (2022), the former Chief Technology Officer of the cryptocurrency exchange Coinbase, for the formation of his network state: freedom over democracy, decentralization, a strong leader figure or sovereign, verification via the blockchain, smart contracts that create consent of the governed (rather than, for example, trust or lazy patriotism), and "diplomatic recognition" or, in Srinivasan's terms, "clout" or power (Srinivasan, 2022, p. 228). Smith and Burrows (2021) trace the obsession with exit to the distinction made by Hirschman in 1970 laying out the different options for governance under conditions of decline: exit, voice, or loyalty. The main options of exit (e.g., emigration, or exiting a market relation) and voice (e.g., protest or voting) are intercut by loyalty (e.g., patriotism). These are not mutually exclusive categories; in pursuit of exit from "democracy," for example, protest may be necessary. This admixture would be one way of reading the January 6 insurrection in the United States.

The geographical ramifications of the blockchain, decentralization, network states, and exit are clearly enormous, and I cannot cover them all here. It is worth highlighting some pressing questions, however. Who can participate and who is excluded—how are its borders managed? Is access to value on the blockchain equal, or is it concentrated, and to what extent is the blockchain truly decentralized or oligarchic? How does a network state throw its weight around or resolve conflict? Can exit really be achieved and, if not, what are the intermediate geopolitical configurations? If a state is no longer predicated on a shared territory, but on some form of "cloud country," what forms of geopolitical analysis are appropriate to understand it? And, perhaps most significant at the moment, what are the material, real-world effects of actually existing exit, especially on inequalities? Although we may not be able to answer these questions yet, I have begun to suggest in this chapter that the politics of exit can be understood through the lens of the digital urban growth machine. Exit on this view is a working example of yet more (post)neoliberal growth, creating new markets as the new "digital fix" for capital. In other words, the metaverse and web3 are neo-libertarian forms of rentier capitalism.

In the remainder of the chapter, I explore some alternatives to the growth machine that do not presume the need for growth but instead take slowness, care, and repair as values, as well as other forms of exit, such as exit to community.

#### **A Slow Data Economy**

In this section I wish to discuss alternatives to the digital growth machine exemplified above in terms of geofences and NFTs. If there is a *growth* model, is it possible to posit and develop a non-growth or degrowth model? There is a significant tradition of "slow x," including slow food, slow scholarship, and slow cities, as well as slow, no, or even degrowth. There is also "doughnut economics," which similarly questions the need for, or the advisability of, persisting with growth as a goal (Raworth, 2017a, b).

The stated purpose of these approaches varies but can include normative statements to the effect that society should value quality over quantity, or that society is moving too fast and consuming too many resources, leading to negative externalities such as global climate change, or negative wellbeing. Kitchin and Fraser (2020) for example argue that we need to adopt "slow computing" due to a societal obsession with social media and other forms of digital communication that can be unhealthy and addictive.

The slow movement does not advocate a rejection—the slow food movement does not seek to abstain from eating for example—but instead a form of "capital switching" in which investment is switched from a focus on newness and innovation to care and repair.

Here I propose a slow digital data movement around six principles.

**Principle 1** A Slow Data Economy should provide a counter narrative to extractive and destructive growth.

Deconstructing the power of innovation helps switch from valuing newness and innovation to caring for and repairing what already exists. The fetish around innovation sits at odds with the fact that value from innovation has benefited fewer people as it has increasingly been captured by elites, as described in the urban growth machine. Although today we are in the fourth industrial age, marked by robots, automation, and algorithms, breakthrough innovations seem few and far between. Apple's top product is arguably the iPhone, first introduced 15 years ago in 2007 by Steve Jobs. Despite some 13 operating system revisions, it is not much different today. Such "innovation capture," where digital technology companies acquire competitors and seek rents via licenses of the technology, is a key component of rentier capitalism and the establishment of monopolies (Christophers, 2020).

The slowdown in the rate of innovation is recognized by writers across the political spectrum. Peter Thiel often argues that the biggest problem today is stagnation and lack of acceleration—although in his case he advocates for speeding up. Vinsel and Russell (2020), as well as the geographer Danny Dorling (2020), argue for a different kind of innovation, rather than assuming that all innovation produces a social good. True, innovation is still linked to value, but drawing on their work, along with that of economists (see Kokkoris & Valletti, 2020), we can conceive of different forms of innovation: that which creates value for social good, that which destroys value (sometimes known as toxic innovation), that which extracts value, and the more recent development of responsible innovation.

It has often been noted that today's mega technology companies, including Apple, Amazon, Meta/Facebook, and Google, have practiced forms of extractive innovation. The argument against such powerful monopolies is that they create inefficiencies in the market; they command higher prices than in competitive markets, and they also tend to suppress innovation. In the case of the big tech companies, one way this operates is that they remove potential competitors from the market by buying them up and absorbing them. For example, after the company Keyhole developed a virtual earth viewer, Google bought the company and launched the viewer as Google Earth (Crampton, 2008). Similarly, Amazon is often accused of (and was sued for) killing off not only small bookshops, but also book chains such as Borders and Barnes & Noble. These practices are known as "kill zones" for the obvious reason that big tech kills off small startups. According to a 16-month US Congressional investigation report on digital markets, big tech was found to hold unwarranted monopoly power, and the investigators wrote that they found "significant evidence" of the suppression of innovation, and that this weakened democracy (United States Committee on the Judiciary, 2020). In digital mapping, for example, the investigation found that Google Maps (the market leader) was worth up to US\$60 billion to the company, and that its market dominance suppressed the ability of competitors to enter the market (United States Committee on the Judiciary, 2020, p. 108). The U.S. Department of Justice has launched several lawsuits against Google for violating antitrust (monopolistic) regulations under both the Trump and Biden administrations.

Vinsel and Russell (2020) argue that for these reasons the value of innovations is overblown, and we should divert resources from them in favour of policies that promote repair, maintenance, and care for what we have, instead of building new creations. Although they do not put it this way, perhaps one way to view this is to promote innovation that creates social value, rather than extracts or destroys it. Social value in this sense may come about by maintaining and protecting what we have, rather than through new innovations (although sustaining innovations may have a role to play in such sustaining activities). It is possible to detect a flavour of this in projects such as the Green New Deal (GND), supported by progressives in the USA. The GND may be an example of the "capital switching" formulated by the economic geographer David Harvey nearly 50 years ago, in which there is a massive switch in the "circuits of capital" from investment in the production of goods and services to investment in infrastructure (Harvey, 1978).

The late British Labour MP, Tony Benn, famously stated five questions of power that we should ask:

What power have you got? Where did you get it from? In whose interests do you exercise it? To whom are you accountable? And how do we get rid of you? (Benn, 2001, col. 510)

This mantra should remind us where technological accountability should be exercised: both through un-black-boxing, such as critical histories of technologies like GIS and now GeoAI (a form of transparency), and through accountability mechanisms such as algorithmic impact assessments (AIAs). Developed in the US, Canada, and the UK, the AIA is a risk-assessment mechanism that could also identify mitigating processes (Reisman, Schultz, Crawford, & Whittaker, 2018).

**Principle 2** A Slow Data Economy should be based on local, place-based approaches, and should not scale.

Locally based solutions that are co-developed with locals will be smaller in scale and consume less energy. For example, Newcastle's new building housing computer science, the Urban Science Building (USB), cost £60m but promised to use solar power (photovoltaic arrays) to generate 33,000 kWh/year. As a sensor-enabled building (reputedly containing over 4,000 sensors) with tracking CCTV, it also promises to manage lighting and energy costs more efficiently.

We also need to act and think locally because of the vast amount of energy required to train machines. The computational power required for general AI is staggering. Some 30 billion barrels of oil are produced a year, and much of that energy is used to power the cloud, data centers, and the IoT. Data centers make up nearly half the global carbon footprint of the tech industry (Dobbe & Whittaker, 2019). In response, big tech has taken steps to power data centers with renewables, and, just as importantly, to be seen to be doing this via various metrics. In 2020 Microsoft announced a commitment to be carbon negative by 2030 (Microsoft, 2020).

More can be done to expose the environmental costs of AI and to move it towards "green AI" (Schwartz, Dodge, Smith, & Etzioni, 2020). Yet we also must be aware of greenwashing. Vicki Mayer (2021) has identified the "aura" around data centers, or their imaginary—their sustainability, their job creation through multipliers, or their development of under-serviced regions outside the cities. Her fieldwork looks at Google's huge new data center in Eemshaven, Netherlands, part of a €2.5 billion investment by the company in the country. She shows that in fact very few people work in the data centers and that they are not really designed for humans; oxygen is kept significantly lower than normal in order to act as a fire suppressant. The coal-burning power station next door, which powers it, is artfully concealed in advertisements. Most telling of all, however, is the way data centers are kept unknowable; all workers sign non-disclosure agreements, the premises are highly securitized and cannot be toured, and many of the non-technical support laborers are held at arm's length via subcontracting on precarious contracts (Mayer, 2021).

Geographers may be particularly interested in machine learning (ML) that can use transfer learning to apply a model trained in one location to another location. A use case would be disaster response, where an ML model trained on imagery of building damage in one part of the world can be used in another part of the world to perform the same task. Conceptually, this might amount to retraining the last few layers of a deep learning model, leaving most layers as trained on the original dataset (such as ImageNet). ArcGIS Pro has some tools that will allow this.
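
A hedged sketch of what such transfer learning might look like, using torchvision's ResNet-18 pretrained on ImageNet; the two-class damaged/intact task and the choice to retrain only the final layer are illustrative assumptions, not the ArcGIS Pro tooling itself:

```python
# Freeze the pretrained backbone and retrain only a new final layer on
# imagery from the new location.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # layers trained on ImageNet

for param in model.parameters():   # freeze everything that was pretrained
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 2)  # new head: damaged / intact

# Only the new head is handed to the optimizer; the frozen layers are
# reused unchanged in the new location.
trainable = [p for p in model.parameters() if p.requires_grad]
```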

For this reason, locally designed AI/ML are preferable. As I discuss next, it is also a powerful democratic process if decision-making about places involves the communities themselves; a tradition in planning going back some decades (Wilson & Tewdwr-Jones, 2022). But how can local residents, who are not technically proficient in AI, co-design how the system might work?

**Principle 3** A Slow Data Economy should be inclusionary.

One process of accountability that has received attention lately is human-in-the-loop (HITL), or its extension society-in-the-loop (SITL) (Rahwan, 2018), which refers to the inclusion of human participation in machine learning. It was first proposed in the field of controlled computer systems in the 1990s and more recently for AI. The human-in-the-loop I have in mind is exemplified by recent work by Huck and colleagues (Huck, Perkins, Haworth, Moro, & Nirmalan, 2021). In their study of volunteered geographic information (VGI) they propose a novel method of combatting under-mapped areas that they dub "centaur GIS." This scheme integrates human and machine activities, using feature recognition by machine learning to propose geometries (shapes and locations of buildings, roads, and other features in the environment) and feature classifications (identifications of what those geometries represent), which are then approved, edited, or rejected by a human participant. This hybrid approach (a centaur is a human-horse hybrid), they argue, is superior to one without a human in the loop: essentially, the machine learning proposes, and the human disposes of, each geometry and feature classification, as sketched below. One of the advantages of this approach is that it is scalable via VGI; if, for example, it were implemented in OpenStreetMap (OSM), editors around the world could approve, edit, or reject geometries and/or feature classifications at scale.
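
Schematically, the centaur pattern might look like the following sketch; `propose_features` stands in for a real feature-recognition model and `human_review` for an actual editor's decision, both invented for illustration:

```python
# Centaur GIS in miniature: the machine proposes geometries and
# classifications; the human approves, edits, or rejects each one.
def propose_features(image):
    # Hypothetical stand-in for an ML feature-recognition step.
    return [{"geometry": "polygon-1", "label": "building", "score": 0.91},
            {"geometry": "polygon-2", "label": "road", "score": 0.47}]

def human_review(feature):
    # In practice a person decides; here a placeholder policy.
    return "approve" if feature["score"] > 0.8 else "reject"

accepted = [f for f in propose_features(image=None)
            if human_review(f) == "approve"]
print(accepted)  # only the confident building polygon survives review
```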

The emphasis in this form of in-the-loop work is placed on understanding and meaning. In current AI, the hope is that meaning will emerge naturalistically by scaling up—hence the community's excitement about large language models (LLMs) such as OpenAI's ChatGPT, which produces human-interpretable text given an input. Famously, LLMs have been described as stochastic parrots (Bender, Gebru, McMillan-Major, & Shmitchell, 2021)—repeating much but understanding little. Like a parrot, the machine learning model operates without reference to meaning, and Bender et al. (2021) detail a number of risks and harms when the models are used in this way, while recognizing that in other use cases, such as automatic speech recognition, there may be utility in using smaller language models.

In a hybrid model the emergence of meaning is not left to the model but provided by the human, who has a vested stake in the process (e.g., a motivation to use OSM to provide more accessible transportation). This has non-trivial implications—it would put into contention the value of the autonomous vehicle (AV) industry, for example, which relies on models to infer and make judgements about objects in the scene on the currently existing road system (AVs traveling on dedicated lanes may be able to avoid this issue).

**Principle 4** A Slow Data Economy should be auditable, accountable, and transparent.

Responsible research and innovation (RRI) has been developed in order to more clearly understand the harms and risks of technology. It was developed in the European Union around 2010 to inform its funding frameworks following the emergence of the human genome project (Owen, Macnaghten, & Stilgoe, 2012), and similar guidance has been established in the UK and U.S. funding contexts. Nevertheless, legislation by itself will likely prove inadequate, as busy researchers will experience it as an imposed top-down solution rather than feeling self-motivated to practice RRI. One way to address this is to make more mainstream the practice of algorithmic impact assessments (AIAs), which were recommended by the AI Now Institute (Reisman et al., 2018). AIAs provide a framework to ensure public accountability of automated decision-making systems. The framework can include peer review, public commentary, and due process for those affected by the systems. Transparency can be rather hard to pinpoint in a deep learning model with many variables, although explainable AI has made some attempts to address this, including in GeoAI (Xing & Sieber, 2021). Progress has faced barriers, however, such as the fact that a GeoAI depends not just on current conditions (e.g., traffic) but also on the local semantics of place meanings and on local regulations. Thus, the AI may be unable to provide an account of its output.

Another way to think about accountability is through affective relations. Meredith Whittaker (2021) suggests that academics and tech industry allies need to organize and develop structures of mutual care. For me this has come about through contributions to establishing pedagogical materials and writings on critique, including holding public webinars on surveillance and geotech, and delivering RRI training to geospatial PhD students. Pedagogy is a form of making allies, or, in a slight twist of the term, of the "exit to community" (E2C). Although again not perfect, E2C is the proposition that innovation capture as an end-goal (having the startup bought out by monopolistic but deep-pocketed tech companies, often known simply as exit) can be replaced by co-creating, co-governing, and co-owning (e.g., via trusts) assets for the community (Mannan & Schneider, 2021). There is also the Turing Way, a collaborative project on open research with over 300 contributors. Open research includes not just open access publication of results but also the code, methods, and data used to arrive at those results, in order to make reproducibility too easy not to do (The Turing Way Community, 2022). The Turing Way is full of inspiring examples, case studies, and discussion: a true pedagogical document.

**Principle 5** A Slow Data Economy should anticipate dual-use.

A dual-use technology is a technology that may find more than one purpose (especially a civilian and a military or law enforcement use). Perhaps all technologies are dual-use? Perhaps, but some alternative uses are arguably worse than others. Think of the humble kitchen knife, for example: since time immemorial it has been used to threaten and harm people as well as to slice bread or chop vegetables. For this reason, it is sometimes said that it is not possible to prevent nefarious uses of technology, or, in milder form, technology developers will acknowledge that prevention is possible but claim it is not their responsibility (they are just engineers). Yet if you try to board a flight with even a Swiss Army knife, or enter a government building with a wrist brace containing a metal insert, you will soon learn otherwise: it is possible to anticipate and regulate. Still, a knife in most cases can harm only one person at a time. By contrast, accessing the vast treasure troves of personally identifiable data online and using them for surveillance or machine learning can and does affect far more people, perhaps nearly all of us. This "platforming" of locational and biometric data not only promises to connect geographically distant actors but also to curate new forms of value (Crampton, 2019), for example by collating data from multiple origins into a central database where they can be analytically combined with other data for purposes of decision-making. A 3-year report by the Ada Lovelace Institute across a number of use cases of biometric technologies in public space in the UK found threats to privacy as well as bias (Ada Lovelace Institute, 2022). Given that these technologies are largely unregulated, the Institute laid out legislative recommendations, including the suspension of live facial recognition and better oversight that could anticipate harms. Perhaps most relevant to our discussion is the proposed standard of proportionality, that is, not a rush to deploy but a slower, more considered approach: "this proportionality test should consider individual harms, collective harms and societal harms that may arise from the use of biometric technologies" (Ada Lovelace Institute, 2022, p. 55).

**Principle 6** A Slow Data Economy should vision the future and develop critical histories.

One promising solution is to use a gaming approach, as practiced by UN Habitat using the popular Minecraft game (UN Habitat, 2021). UN Habitat is the custodian of Sustainable Development Goal 11, for sustainable cities and communities. Minecraft is a computer video game that can be quickly taught to participants. Using a Minecraft model of the site to be visioned, participants can work on medium-grade computers to rebuild it or try out new designs (the experience is rather like building with digital 3D Lego blocks). Building the site can involve taking pictures of the area, working with Google Maps, or tracing the area. Participants can add or move blocks around in the site to visualize a possible future design (see Fig. 14.2).

Creating space for different imaginaries is critical, especially when capital itself claims to be the only alternative: "capitalist realism," as captured in the phrase "it is easier to imagine the end of the world than the end of capitalism" (Fisher, 2009, p. 2). Gaming in Minecraft is not zero-sum: there is no correct answer, and it stimulates play and experimentation. Future visioning has also been the province of science fiction and science fantasy writers such as Kim Stanley Robinson (e.g., his novel *New York 2140*, in which a near-future New York City has been flooded by a 50-foot rise in sea levels due to global warming) or John Brunner's classic 1972 environmental dystopia *The Sheep Look Up*.

**Fig. 14.2** Image from the Minecraft city visioning workshop for Conakry, Guinea. Source: Reprinted from UN Habitat (2021). Copyright by UN 2021. Reprinted with permission

We also need to learn from the past in order to understand the present (what Foucault called a genealogy of the present). We need rich histories of the present, especially critical histories of AI and GeoAI. Those histories may even contribute to a kind of counter-narrative that makes space for problematizing hidden assumptions, such as the claim that "legislation stifles innovation" or that innovation is a universal social good.

#### **Conclusions**

This chapter has examined developments in urban geospatial technologies from the perspective of what Rosen and Alvarez León (2022) call the digital urban growth machine. As with the original growth machine, the digital manifestation is deeply dependent on the material creation and extraction of value. Particularly important, though, are "renderings," or ways of operationalizing the creation and extraction of value. I argue that they do so under a rentier model, or more broadly a system of rentier capitalism, in which the primary defining feature is owning or controlling particular assets, that is, having rather than doing (Christophers, 2020). Such ownership enables the creation and monopolistic control of new digital markets for the generation and appropriation of value, both monetary and non-monetary. Akin to Marx's technological fix and David Harvey's spatial fix (Harvey, 1982), we can see this as a form of "digital fix."

The two domains discussed here, geofences/geoframing and cryptocurrencies and NFTs on the blockchain, could be usefully extended. I have argued that, as digital geographies operating to sustain rentier capitalism, they are productive of new forms of value. In the case of geofences, they *produce new forms of subjectivity*, inasmuch as they concretize the governance relation between individuals and space. Activities within a geofence, whether established as a search zone by a law enforcement agency or as a no-go area for an e-scooter (where the scooter will slow down or not operate at all), can be governed at the individual rather than the group level. If previously we considered governance as applying to spatial units (such as political jurisdictions), we are now able to govern spaces with much more agility and at the level of the individual who enters or occupies them. Agile, because geofences can apply for short periods of time and can even be moved along with the movement of problematic subjects. These geographical digital representations, in other words, serve to problematize occupants of both private and public spaces as dangerous or risky individuals. They thus form an ownership over all sorts of new spaces from which value can be extracted in rent form: the creation of value by dint of having rather than creating, the classic definition of the rentier. Yet the societal impacts of geofences, that is, who is making them, who is profiting from them, and especially who is affected by them, remain little studied.
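Mechanically, this kind of individual-level governance can be as simple as a polygon plus a policy evaluated against each reported position. A minimal sketch follows, assuming the shapely library; the coordinates, thresholds, and rules are hypothetical illustrations, not any vendor's actual system.

```python
# A sketch of a geofence rule like the e-scooter no-go zone described above:
# a polygon and a policy applied to any device reporting a position inside it.
from shapely.geometry import Point, Polygon

# Hypothetical no-go area defined as (lon, lat) vertices.
no_go_zone = Polygon([
    (-1.615, 54.975), (-1.610, 54.975),
    (-1.610, 54.978), (-1.615, 54.978),
])

def govern_scooter(lon: float, lat: float) -> str:
    """Return the speed policy for a scooter at the reported position."""
    here = Point(lon, lat)
    if no_go_zone.contains(here):
        return "disable"  # the scooter will not operate inside the fence
    # Throttle near the boundary (hypothetical rule; distance is in degrees).
    if no_go_zone.exterior.distance(here) < 0.0005:
        return "slow"
    return "normal"
```

Note that the rule attaches to whoever enters the polygon, which is precisely what shifts governance from spatial units to individual occupants; moving or re-dating the polygon re-targets the rule without any legislative act.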

The blockchain and its usage for cryptocurrencies and especially NFTs represent a rather more complex case: more clearly part of the rentier model but less reliant on digital geographic renderings. While there is a strong case to be made that cryptocurrencies offer a "digital fix" as an asset class for speculative capital to flow into, and that monopolistic control of such cryptocurrencies has been the modus operandi since their establishment (so that they again fall into the rentier model), it is the NFT market that has tended to exploit digital geographic renderings more overtly. Earth2.io is one example; the "metaverse" is another. But it should be recalled that NFTs are deeply tied to cryptocurrencies. As their name implies, being non-fungible they cannot be exchanged for another asset of the same type: they are unique. This uniqueness has to be secured and acknowledged when it comes to digital assets (for example, a jpg image), because an identical copy can be made, but copies lack the entry on the blockchain that makes the original publicly verifiable as the NFT asset. Furthermore, NFTs are designed to be bought with cryptocurrencies using cryptocurrency wallets, mostly because network or "gas" fees are charged for each transaction (that is, fees for the computational power to validate the transaction; marketplaces may also charge additional transaction fees). All these activities are possible because cryptocurrencies and NFTs on the blockchain *produce a new politics of exit*. As Raymond Craib (2022) argues, this exit is not new, but the "myth" that escape is possible (Bruggeman, 2022) through decentralization is an extremely useful one for extending the tendrils of the rentier economy into new "cloud countries" (Srinivasan, 2022).
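The uniqueness point can be made concrete with a toy example: a byte-for-byte copy of the jpg hashes identically, so what distinguishes "the" NFT is not its content but the ledger entry tying a token ID to an owner. The record structure below is a hypothetical simplification, not any real chain's data model.

```python
# A sketch of why a copy of the asset is not "the" NFT: content hashes are
# identical for copies, so verifiability rests on the ledger record alone.
import hashlib

# Hypothetical on-chain record for one token (fields are illustrative).
chain_record = {
    "token_id": 42,
    "owner": "0xABC...",                 # elided placeholder address
    "content_hash": "9f86d081884c7d65",  # hash registered at minting (truncated)
}

def sha256_prefix(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()[:16]

def matches_minted_content(image_bytes: bytes) -> bool:
    # Any identical copy passes this check, too; the hash cannot distinguish
    # copies. What the copy lacks is the token entry naming an owner.
    return sha256_prefix(image_bytes) == chain_record["content_hash"]
```

The asymmetry is the rentier point in miniature: the content circulates freely, while the scarce, ownable thing is the ledger entry itself.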

These two domains can be extended, as Rosen and Alvarez León (2022) suggest in a footnote, to digital twins, or real-time simulations of buildings and urban areas. Digital twins are often visualizations of such spaces and, as such, are *productive of new territories*. These territories are made more governable through control of the sensors and devices that collect real-time data, which are processed by optimization algorithms and fed back as changes to the digital-material infrastructure. In the case of a building information model (BIM), for example, sensors may detect persons entering a room at a particular time and adjust the HVAC system, heating or cooling the room. What a digital twin permits, however, is predictive governance: heating or cooling the room in anticipation of its occupancy. A more complex model may simulate a whole city or even a region. In order to make predictions, the models have to be parameterized, especially with population data or a proxy (usually, and not necessarily correctly, assumed to be growing).
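The difference between reactive and predictive governance can be sketched in a few lines; the occupancy forecast, setpoints, and pre-heating window below are hypothetical illustrations, not any BIM vendor's logic.

```python
# A sketch contrasting reactive control (respond to detected occupants)
# with the predictive control a digital twin enables (condition the room
# before expected occupancy).
from datetime import datetime, timedelta

# Hypothetical forecast parameterizing the model: hour -> expected occupants.
occupancy_forecast = {9: 12, 10: 30, 11: 28}

def reactive_setpoint(current_occupants: int) -> float:
    # React only once sensors detect people in the room.
    return 21.0 if current_occupants > 0 else 16.0

def predictive_setpoint(now: datetime,
                        preheat: timedelta = timedelta(hours=1)) -> float:
    # Predictive governance: heat or cool ahead of forecast occupancy.
    upcoming_hour = (now + preheat).hour
    return 21.0 if occupancy_forecast.get(upcoming_hour, 0) > 0 else 16.0
```

The politics sits in the forecast: whatever population or occupancy assumptions parameterize the twin are silently enacted on the material building.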

What is perhaps most surprising about these developments, however, is that they do not stand unchallenged, and an increasing number of responses, gathered under the banner of slowness, are now making themselves heard. In this chapter I have been inspired by this braid of thinking to offer a few principles (by no means exhaustive) for urban geospatial technologies that we might label the Slow Data Economy. I offered six principles, starting with counter-narratives to growth. One of the key tasks is to better understand innovation, and to offer another conception of innovation and regulation than the common one that regulation stifles innovation. Here I tried to break open the idea of innovation as a universal good by distinguishing different types of innovation, including innovation that extracts and innovation that destroys value. These forms of innovation do need to be stifled; extractive innovations are at the heart of the rentier model. Indeed, where "rent-seeking" behavior is most pronounced, that is, where rentiers sit on and sweat existing assets rather than innovate, extractivism and rentier capitalism aptly demonstrate that innovations for social good, such as those that spread their benefits, are not just disfavored but actively suppressed. Legislation is clearly needed to rectify this imbalance, for instance by loosening intellectual property (IP) regimes, taxing corporate profits, and incentivizing investment in renewables.

Where algorithms and digital developments are local/non-scalable, inclusionary, and audited, we can provide a slower, more deliberate approach. If we can build in better understandings to anticipate and mitigate how technologies may be used, for example by producing critical histories of GIS, GeoAI, and geotechnologies, we can create richer, more inclusive visions for the future. These are undoubtedly inadequate by themselves if they are not part of a bigger movement to challenge the ideology of growth. But their possibilities offer a way of thinking that might yet be a radical response for our times.

#### **References**

Ada Lovelace Institute. (2022). *Countermeasures: The need for new legislation to govern biometric technologies in the UK*. Retrieved from https://www.adalovelaceinstitute.org/report/countermeasures-biometric-technologies/

Amster, H., & Diehl, B. (2022). Against geofences. *Stanford Law Review, 74*(2), 385–445.

Ash, J., Kitchin, R., & Leszczynski, A. (2016). Digital turn, digital geographies? *Progress in Human Geography, 42*(1), 25–43. https://doi.org/10.1177/0309132516664800


**Jeremy Crampton** is Professor of Urban Data Analysis at Newcastle University. He is a broadly trained interdisciplinary social scientist with a background in geography, mapping, Geographic Information Science, and geospatial technologies. For most of his career Jeremy has been interested in the socio-political and ethical aspects of geographic surveillance, spatial big data, and algorithmic governance, and he is passionate about working with the public to better understand the power and sensitivity of locational data. In this regard he was co-author on a White Paper to help develop policy frameworks for socially responsible location data and has co-edited two editions of leading encyclopedias in human geography. Jeremy is currently writing a book for the public on locational technologies and everyday life which will develop some of the discussion in this chapter.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

## **The Klaus Tschira Foundation**

The German foundation Klaus Tschira Stiftung supports natural sciences, mathematics and computer science and the appreciation of these subjects. It was founded in 1995 by physicist and SAP co-founder Klaus Tschira (1940–2015) by private means. Its three priorities are: education, research and science communication. This commitment begins in kindergarten and continues in schools, universities and research institutions throughout Germany. The foundation advocates the dialogue between science and society. Further information (in German) at: www.klaus-tschira-stiftung.de

The Klaus Tschira Foundation is located in Heidelberg and has its head office in the Villa Bosch, once the residence of Carl Bosch, a Nobel laureate in chemistry (Figs. 1 and 2).

**Fig. 1** Participants of the symposium "Knowledge and Digital Technology" at the Studio Villa Bosch in Heidelberg, Germany. (© Johannes Glückler, Heidelberg)

**Fig. 2** Villa Bosch, the head office of the Klaus Tschira Foundation, Heidelberg, Germany. (© Peter Meusburger, Heidelberg)

## **Index**

#### **A**

Academics, 2, 4, 52, 70, 85, 88, 91, 92, 153, 156, 172, 220, 271 Action citizen, 7, 181 civics, 181 collective (*see* Collective action) Activism, 6, 7, 153–166 Algorithmic inequalities, 95, 98 Alienation, 175–180, 182, 187 Application downstream, 81, 84, 85, 94–98 ArcGIS, 136, 137, 270 ArcView, 136 Artificial intelligence centralized, 8, 239, 240 personal, 241, 245 Assets stocks, 204 Awareness, 7, 53, 74, 88, 90, 96, 97, 142, 163, 186

#### **B**

Banking, 217, 265, 266 Behavior, 5, 8, 47–59, 74, 75, 91, 215, 221, 225, 229, 234, 239–241, 245, 247, 251, 253, 275 Behavior support, 247 Benchmarking, 154, 164 Big data, 1, 8, 55–58, 79, 80, 82, 89, 114, 134, 185, 225, 232–234 Big data technology, 1, 8, 55–58, 79, 80, 82, 89, 134, 185, 225–234 Bitcoin, 203, 205, 206, 208, 209, 211, 212, 214–217, 264, 265

Blockchain, 88, 114, 203–221, 257–275 Blockchain-based decentralized business model (BDBM), 8, 203–221 Blockchain discourse, 204, 208, 210, 211, 220 Blockchain project, 211 Blockchain protocol, 205 Blockchain technology, 8, 9, 88, 203–206, 208–210, 213, 214, 218–221

#### **C**

California, 136 Cape Town, 153, 155, 156, 158–166 Capital, 3, 6, 70, 107, 111, 112, 114, 116–118, 120, 122, 123, 125, 126, 154, 162, 188, 191, 192, 258–260, 264, 266, 267, 269, 272, 274 Capitalism digital (*see* Digital capitalism) surveillance (*see* Surveillance capitalism) Care elderly (*see* Elderly care) health, 4, 23, 24, 30, 32, 206, 208, 210, 211, 218 social, 9, 24, 193, 268 Caregivers, 17, 18, 20, 21, 24, 25, 28, 30–33, 35 Care professionals, 17, 30, 31, 34, 36–38 Care robot, 4, 5 Care workers, 24–26 Case(s) profile(s), 66, 67 Census Bureau U.S., 133, 135–138, 146, 147 Central bank digital currencies (CBDC), 213, 218


Centralized, 8, 10, 136, 204, 205, 210, 211, 213, 215–217, 221, 239, 240, 242, 243, 250, 265 City European, 6, 114–117, 121, 122, 125, 126 smart, 6, 153, 156–158, 163, 169–173, 179, 258 university, 6, 109, 114, 119–127, 170 Civil society, 7, 139, 155, 156, 158 Climate change, 79, 170, 173, 174, 194, 265, 267 Cluster industrial, 109, 116 Code, 66, 134, 139, 142, 144, 209, 219, 220, 251, 260, 271 Collective action, 36, 176 Communities of practice, 98 Connecticut, 136 Consumer, 5, 63–75, 111, 234 Copyright, 6, 65, 69, 71, 73, 134, 139, 144, 147, 230, 273 Copyright Law, 134, 147 Covid-19, 7, 97, 159, 160, 166, 180 Cryptocurrencies, 9, 148, 203–206, 208, 209, 211–213, 215–217, 219, 221, 257, 263–267, 274 Cryptocurrency community, 212 Cryptographic, 205, 212, 213 Cue(s), 64–68, 70, 72–74

Customer(s), 2, 6, 8, 70, 204–206, 210, 213–221, 240, 247, 248, 260 Cyborg activism, 7, 153–166

#### **D**

Data management, 8, 204, 209–211, 213, 218 monitoring, 156, 170 normative, 4, 138, 154 performance (*see* Performance data) personal, 8, 10, 81, 89, 171, 239–255 storage, 205 urban (*see* Urban data) Databases, 6, 57, 112, 115, 134–138, 186, 204, 206, 207, 210, 260, 262, 265, 272 Datafication, 5, 9, 79–98 Data science (DS), 5, 47, 48, 50, 51, 53, 58, 59, 96, 98 Datasets, 67, 137, 138, 140, 142, 145, 270 Decentralization infrastructural, 211, 213, 219 institutional, 211, 213, 219 Decentralized, 8, 159, 160, 203–221, 241, 243, 253, 254, 266

Decentralized autonomous organizations (DAOs), 204, 206, 209, 212, 214, 221 Decentralized management, 8, 208, 241–244, 254, 255 Decentralizing, 204–206, 211, 213, 220, 241, 242 Decision(s) consumer, 63, 64 support, 5, 48–50, 53, 59, 64–66, 69, 74, 181, 216 Decision cues, 65 Decision-maker(s), 21, 29, 31, 33, 37, 39, 48–50, 63, 64, 74, 154, 174, 227, 228 Decision-making algorithmic-support, 48, 50 consumer, 64, 65 informed, 5, 64, 68 Decision option(s), 64, 68 Decision tree(s), 5, 64–74 Demographic challenge, 17 Description, 65, 67, 226, 227 Description-experience gap, 226, 227 Detection probability, 232 Determinism, 97, 153 Development, 1–6, 8, 47–50, 56, 58, 64–68, 70, 72, 83, 85, 88–92, 94, 96, 108, 111, 116, 117, 126, 131, 134–139, 144, 145, 147, 148, 154, 157, 162, 163, 170, 171, 181, 182, 188, 191, 204, 218, 249, 250, 254, 268, 269, 272, 273, 275 Digital, 1–10, 19, 20, 47, 54–56, 63, 64, 79–83, 86, 90, 94, 97, 111, 112, 114, 131, 132, 148, 153, 154, 159–161, 164–166, 170, 173, 185–194, 204, 205, 208, 209, 211–213, 216–221, 239, 240, 247, 248, 254, 257–275, 282 Digital applications, 205 Digital capitalism, 188, 194 Digital economy, 3, 79, 90, 131, 132, 148, 258 Digital era, 81, 82, 84–90, 94, 97 Digital goods, 132 Digital growth machine, 259, 264, 267 Digital labor, 6, 7, 86, 185–194 Digital Leninism, 239, 240 Digital services, 221 Digital tools, 153, 154, 166 Digital transaction, 205 Discourse, 75, 80, 81, 84, 87, 93, 96, 153, 155, 156, 158, 159, 161–166, 175, 180, 208, 210, 214, 219, 220 Discrimination, 178 Disintermediating, 209, 213, 220, 221 Disintermediation, 210, 211, 219 Disparities territorial, 126

#### **E**

Economic geography, 79, 131, 132, 263 Economy attention, 96, 109, 131, 239, 263 digital (*see* Digital economy) Ecosystem(s) business, 4, 6, 111, 113 care robotics, 28, 29, 32 entrepreneurial, 3, 6, 108, 109, 111, 113, 116, 126 innovation (*see* Innovation ecosystem(s)) Edtech, 5, 80, 81, 84, 87–90, 95, 97 Education postsecondary, 85, 97, 98 Effectiveness, 17, 67, 69, 71, 73, 74, 85, 208, 225, 226, 232 Efficacy, 68, 71, 73, 74, 159, 165 Elderly care, 4, 17, 19–21, 23–25, 28, 33 Elderly care sector, 17 Enforcement, 8, 192, 225–234, 260, 262, 272, 274 Enforcement frequency, 229, 231 Enforcers, 225, 226, 230 Engineering, 4, 59, 95, 153, 207 Enterprise Ethereum, 213 Entrepreneur, 84, 107, 108, 111, 258 Environment(s) decision making, 63–75 decisions, 63–75 Ethereum, 203–206, 208, 212, 213, 264, 266 Ethical, 4, 9, 10, 17, 21, 27, 30, 31, 83, 96, 97, 169 Ethical problems, 4, 21, 97 European Union, 17, 138, 148, 271 Events high probability, 226, 232 low probability, 226 rare, 226, 227 Experience(s) past, 227, 229, 231

#### **F**

Facebook, 7, 70, 72, 112, 160, 164, 165, 170, 174–176, 178, 181, 188, 193, 204, 217, 240, 268

Fast-and-frugal decision trees (FFTs), 5, 64, 65, 67, 74 Federal Geographic Data Committee, 137 Federal Government, 133, 134, 136, 139–142, 146, 147 Finland, 107 Finnish, 107 Formats news, 72–73 opinions, 72 Founder student, 121, 123, 125 Fourth industrial revolution, 114 Framework relational, 191 Friction, 7, 169–182

#### **G**

Geofences, 9, 257–275 Geoframing, 257, 260–263, 274 Geographers, 131, 132, 189–192, 257, 262, 263, 268–270 Geographical, 3, 6, 9, 57, 107, 109, 131, 175, 189–191, 259, 260, 263, 265, 266, 274 Geographically, 131, 265, 272 Geographic information, 6, 132–147, 270 Geographic information markets, 6, 131–148 Geographic information systems (GIS), 136–138, 141, 169, 260, 269, 270, 275 Geographies conjunctural, 190 digital, 1, 4, 9, 185, 257–275 economic (*see* Economic geography) relational, 192, 193 Geospatial market, 139 Geoweb, 132 Germany, 68, 70, 74, 281, 282 Globalization economic, 132 Global South, 153, 154, 157, 193 Google Earth, 268 Google Maps, 145, 221, 268, 272 Governance data-based, 171, 176–178 decentralized, 8, 10, 205, 210, 214, 250, 253, 254, 266 participatory, 173, 175, 181 smart, 7, 82, 170, 172–176, 178, 181, 266 urban (*see* Urban governance) Government, 6, 7, 82, 91, 98, 131, 133–147, 159, 162–164, 166, 169–172, 174–181, 209, 213, 239, 241, 250, 253, 260, 262, 272

GPS, 145, 262 Graph documents, 8, 241, 251–254 Green New Deal (GND), 269

#### **H**

Human-in-the-loop (HITL), 270

#### **I**

Industries automobile, 210 energy, 203, 204, 208 healthcare, 204 Industry 4.0, 208 Information digital, 47, 54, 55, 67, 68, 72, 83, 133, 142, 147, 148, 155, 158, 274 evidence-based, 47, 68, 69 health, 67–69, 74, 142 markets, 131–133, 135, 147, 148 personal, 81, 148, 254 Innovation(s) digital, 3, 6, 9, 80, 94, 111–113, 216 open, 108, 109, 111, 112, 271, 275 Innovation ecosystem(s), 18, 28–32, 34, 35 Institutions education, 5, 30, 80, 89–92, 94, 95, 98 Intellectual property (IP), 4, 6, 112, 131–133, 147, 262, 275 Interoperability legal, 6, 131–135, 147 technical, 6, 131, 133–139 Interpretation, 3, 5, 20, 52, 58, 68, 70, 72, 74, 158, 166, 225, 258 Intuitive classifier, 227–229 Intuitive classifier explanation, 229 Intuitive classifier hypothesis, 229 Investment options digital, 70

#### **K**

Kansas, 135 Knowledge building, 18, 21, 22, 25, 27, 30, 35, 37 contextual, 5, 94, 97 ecosystem, 32 production, 79–98, 110, 114, 155, 158, 191 sharing, 19

#### **L**

Labor digital (*see* Digital labor) immaterial, 7, 186, 190, 194 market, 85, 88, 187, 192 Large language model (LLM), 2, 251, 270 Law(s) contracts, 134 copyright (*see* Copyright Law) Litecoin, 203 Locally available talent, 114, 121, 126 London, 7, 117–119, 121, 122, 124, 125, 170, 174, 176, 179, 180, 265 Los Angeles, 136 Low-traffic neighbourhoods (LTNs), 7, 170, 173–182

#### **M**

Machine learning (ML), 1–3, 47, 67, 89, 188, 193, 208, 228, 254, 270, 272 Mainstream, 158, 204–206, 210, 214–221, 271 Map topographic, 134, 142 Marginalized, 82, 83, 91, 97, 154, 159 Market(s) labor (*see* Labor market) Marketplace(s), 8, 204, 209, 212, 219, 260, 262, 274 Metaverse, 263–267, 274 Migration, 72, 98 Mining, 190, 205, 265, 266 Mobilization, 7, 155, 157–159, 161, 163, 166, 172 Monitoring data (*see* Data monitoring) Monopolies, 125, 210, 264, 268 Monopolistic, 268, 271, 273, 274 Monopolized, 210 Multi-level perspective, 22, 33–35 Municipalities, 29, 135, 145, 154, 161

#### **N**

NASA, 142 National Interoperability Framework Observatory of the European Commission, 134 National Spatial Data Infrastructure (NSDI), 133, 135, 137–139, 147

National States Geographic Information Council, 135 Network(s) digital, 3, 7, 132, 140 global, 109, 205 social, 155 News, 5, 54, 64, 65, 67, 72–73, 156, 161, 176, 240 Non-fungible-token (NFT), 216, 264–266, 274

#### **O**

Organizations, 1, 3, 4, 6–8, 18–21, 28–32, 38, 47, 49, 59, 63, 74, 95, 107–109, 112, 113, 134, 135, 138, 142, 155, 156, 160, 165, 176, 180, 204, 209, 212, 253 Orientation process, 36, 39

#### **P**

Pandemic, 7, 156, 161, 164, 166, 243 Paradigms, 4, 80, 81, 87, 132, 154, 214, 215, 219 Participation civics, 169, 171, 172 democratic, 173, 181 Participatory mapping, 177 Pedagogy new, 5, 80, 81, 84, 90, 91, 94, 95, 97, 98 Peer-to-peer, 203–205, 208, 210, 211, 213, 215, 217, 221 Performance data, 114 Personalized identifiable information (PII), 232–234 Personal life repository, 8, 243–244 Platform urbanism, 156, 190, 258 Polarization, 6, 7, 79, 80, 96, 98, 153 Policy affective, 180 Politics, 7, 153–155, 158–159, 174, 176, 182, 191, 193, 257, 259 Politics of exit, 263, 266, 267, 274 Pollution, 170, 173, 174, 179, 180, 258 Privacy concerns, 234, 241 invasion of, 226, 232 Private, 6, 8, 10, 85–87, 96, 134, 137, 138, 140–142, 144, 145, 147, 155, 157, 161, 162, 175, 206, 219, 234, 240, 244, 246, 250, 262, 274, 281 Privately owned public space (POPS), 258 Production upstream, 81, 98

Profile(s) cue, 66 Public, 2, 6, 10, 18, 24, 25, 29–31, 36, 56, 57, 74, 86, 134, 135, 139–142, 144, 146, 147, 155–166, 170, 173, 174, 178, 181, 182, 206, 210, 218, 225, 230, 232–234, 240, 242, 243, 250, 253, 254, 258, 271, 272, 274 Punishment expected, 230 gentle, 229–231 human, 225 severe, 226, 229, 230, 234 severity of, 230

#### **R**

Relation(s), 3, 4, 7, 9, 55, 56, 80, 87, 94, 97, 108, 112, 125, 127, 132, 133, 140, 157, 158, 160, 164, 176, 179, 186–188, 190–192, 194, 251, 258, 266, 271, 274 Relationality, 80, 186, 191–193 Rendering, 84, 93, 94, 172, 182, 258–260, 263, 264, 273, 274 Rentier, 258, 259, 262, 263, 267, 268, 273–275 Rentier model, 258, 262, 273–275 Resource, 4, 6, 7, 63, 66, 117, 135–137, 139–142, 156, 160, 172, 190, 241, 265, 267, 268 Re-spatialization, 194 RisikoAtlas, 64 Robot care (*see* Care robot) Zora, 23–25, 29, 34 Robots and the Future of Welfare Services (ROSE), 22, 40 Rule enforcement gentle, 226, 229–234 policy, 230, 232

#### **S**

Scaleup, 6, 107–127 Scaleup funding, 117 Scaling, 6, 94, 124–126, 270 Security, 8, 70, 139, 144, 187, 203, 204, 208, 213, 214, 216, 220, 242, 243, 246, 254, 264 Selection information, 63, 65 statistical, 66, 68, 70, 72 Self-sovereignty, 214, 216, 219–221

Sensitivity human, 58, 225 Sensors, 54, 81, 83, 169, 190, 232–234, 258, 269, 275 Shift, 7, 9, 84, 85, 111–114, 133, 139, 155, 156, 163, 166, 170, 174, 175, 181, 214–219, 250, 262 Situated knowledge, 98, 159 Situated learning, 98 Slow data economy, 9, 267–273, 275 Slow x, 267 Small samples, 227–229, 231–233 Smartness, 82, 169–173 Social media, 3, 54, 56, 58, 67, 72, 81, 82, 95, 140, 154, 155, 157, 160, 162, 163, 166, 185, 186, 189, 192–194, 204, 267 Society-in-the-loop (SITL), 270 South Africa, 153, 154, 156, 157, 159, 161, 164 Space Euclidean, 186, 189, 191, 194 political, 81, 98, 164, 176, 180, 181, 191 relational, 7, 164, 186, 190 Startup, 87, 107–109, 111–118, 121, 122, 124–127, 263, 268, 271 State control, 218, 266 disintegration of, 98 subsidies, 98, 109 welfare, 98 Status cue(s), 66, 74 Strategic Research Council (SRC), 22, 40 Strategies, 7, 64, 86, 87, 94, 98, 142, 145, 146, 153–155, 157, 158, 161, 163, 170, 172, 173, 175, 177, 181, 187–188, 191, 228 Structure, 3–10, 64, 65, 85, 91, 113, 136, 159, 170, 181, 188, 205, 210, 211, 220, 226, 241, 251, 271 Supreme Court, 134, 135, 179, 260, 262 Surveillance capitalism, 239, 240 Surveillance system(s) video, 232 Sweden, 22, 27, 33 Swedish, 32 Systems, 3, 5, 8, 48–50, 54, 55, 64, 82, 84, 89, 91–93, 95–97, 117, 126, 127, 133–135, 142, 153, 154, 159, 164, 165, 169–173, 186, 188, 190, 193, 194, 205, 208–213, 216–218, 220, 232, 234, 239, 244, 249, 253, 254, 259, 263–268, 270, 271, 273, 275

#### **T**

Tech mediated, 88, 95–97 Technical, 3, 6, 95, 131–135, 137–139, 144, 147, 162, 164, 171, 204, 210, 211, 216, 218–220 Technological, 1, 3–6, 8–10, 19, 29, 33, 35, 49, 81, 90, 108, 109, 113, 114, 132, 136, 140, 145, 153, 157, 166, 169, 170, 172, 182, 215, 221, 231, 263, 269, 273 Technology blockchain (*see* Blockchain technology) digital, 2–9, 111, 132, 140, 153, 157, 170, 186, 188, 204–206, 209, 214, 219, 258, 259, 263, 268, 273, 275 health, 4, 182 needs, 20 robot, 9, 188, 267 welfare, 4 Thinking critical, 79, 83, 95, 97 relational, 94, 190–192, 194 technocratic, 80, 83, 97 TIGER file format, 133 TIGER/LINE, 137, 144, 145 Tools, 2, 5, 47, 48, 50, 52, 53, 58, 64–66, 74, 96, 131, 155, 157, 159, 164, 166, 170, 177, 270 Topologically integrated geographic encoding and referencing (TIGER), 135–138, 147 Tracking, 81, 206, 213, 217, 269 Trading, 48, 58, 211, 215–217, 265, 266 Traffic, 7, 55, 141, 169, 170, 173–174, 178–180, 229, 239, 258, 271 Transparency, 70, 75, 82, 203, 208, 209, 213, 217, 218, 220, 269, 271 Tree consumer, 64, 68, 70, 71 decision (*see* Decision tree(s)) Trump, 111, 145, 240, 262, 268 Trust, 4, 159–165, 169–182, 205, 216, 253, 265, 266, 271

#### **U**

Uncertainty, 2, 5, 63–75, 113 Urbanism digital, 157, 257 market led, 164 platform (*see* Platform urbanism) southern, 154 U.S., 6, 131–147, 240, 268, 271 U.S. Geological Survey (USGS), 140–142, 144–146

#### **V**

Violation(s) behavior(s), 226, 230–232, 234

#### **W**

Wallet cold, 217 providers, 204, 213, 216 storage, 217 Website(s), 2, 57, 64, 68, 72, 193, 219 Wellsprings, 90–94 Work digital, 7, 97, 135, 259, 272 Workplace, 3, 94–96, 186, 189, 193